I0516 21:10:42.391172 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0516 21:10:42.391484 6 e2e.go:109] Starting e2e run "79c05dbd-8833-4311-8e5a-96d7c6c4a021" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589663441 - Will randomize all specs
Will run 278 of 4842 specs
May 16 21:10:42.443: INFO: >>> kubeConfig: /root/.kube/config
May 16 21:10:42.448: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 16 21:10:42.476: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 16 21:10:42.517: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 16 21:10:42.517: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 16 21:10:42.517: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 16 21:10:42.530: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 16 21:10:42.530: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 16 21:10:42.530: INFO: e2e test version: v1.17.4
May 16 21:10:42.532: INFO: kube-apiserver version: v1.17.2
May 16 21:10:42.532: INFO: >>> kubeConfig: /root/.kube/config
May 16 21:10:42.543: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 21:10:42.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 16 21:10:42.620: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ac969adf-c83b-40bf-923f-a36232c011fb
STEP: Creating a pod to test consume configMaps
May 16 21:10:42.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee" in namespace "projected-9054" to be "success or failure"
May 16 21:10:42.646: INFO: Pod "pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee": Phase="Pending", Reason="", readiness=false. Elapsed: 15.659469ms
May 16 21:10:44.670: INFO: Pod "pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038914075s
May 16 21:10:46.673: INFO: Pod "pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042638587s
STEP: Saw pod success
May 16 21:10:46.673: INFO: Pod "pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee" satisfied condition "success or failure"
May 16 21:10:46.676: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee container projected-configmap-volume-test:
STEP: delete the pod
May 16 21:10:46.843: INFO: Waiting for pod pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee to disappear
May 16 21:10:46.975: INFO: Pod pod-projected-configmaps-90d6346b-1df2-4b76-87ea-1dea6e35c2ee no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 21:10:46.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9054" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSS
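The test above boils down to mounting a ConfigMap into a pod through a projected volume and asserting on the file the container reads. A minimal sketch of an equivalent object using the client-go API types; the pod and ConfigMap names, image, and mount path are illustrative assumptions, not the generated values in the log:

```go
// Sketch of a pod that consumes a ConfigMap via a projected volume,
// the pattern exercised by the [sig-storage] Projected configMap test.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "busybox", // assumed image
				Args:  []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					// A projected volume merges several sources (configMap,
					// secret, downwardAPI, serviceAccountToken) into one dir.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume", // assumed name
								},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```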
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 21:10:46.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 16 21:10:47.903: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 16 21:10:50.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 16 21:10:52.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260247, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 16 21:10:55.176: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 16 21:10:55.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 21:10:56.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8315" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:9.563 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":2,"skipped":22,"failed":0}
SSSSSSSSSS
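What the steps above wire together: the CRD declares a `Webhook` conversion strategy pointing at the service the test deploys, and the apiserver then sends ConversionReview requests there whenever a client asks for a CR in a different version than the one stored. A minimal sketch of that conversion stanza using the apiextensions/v1 types; the namespace and service name echo the log, while the path and review versions are assumptions:

```go
// Sketch of the spec.conversion block that routes CR v1<->v2 conversion
// through a webhook service, as exercised by the test above.
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // assumed webhook path
	conversion := &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-8315",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
				},
				// CABundle would carry the PEM bundle for the serving cert
				// prepared in the "Setting up server cert" step.
			},
			// ConversionReview API versions the webhook understands.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	fmt.Println(conversion.Strategy)
}
```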
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 21:10:56.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-jqdc
STEP: Creating a pod to test atomic-volume-subpath
May 16 21:10:56.640: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jqdc" in namespace "subpath-8526" to be "success or failure"
May 16 21:10:56.656: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.414633ms
May 16 21:10:58.661: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021091545s
May 16 21:11:00.665: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 4.025311745s
May 16 21:11:02.669: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 6.029506364s
May 16 21:11:04.674: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 8.033887512s
May 16 21:11:06.679: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 10.038658585s
May 16 21:11:08.684: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 12.043598941s
May 16 21:11:10.688: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 14.048182386s
May 16 21:11:12.693: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 16.052661482s
May 16 21:11:14.697: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 18.057422304s
May 16 21:11:16.702: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 20.061907213s
May 16 21:11:18.707: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Running", Reason="", readiness=true. Elapsed: 22.066693038s
May 16 21:11:20.711: INFO: Pod "pod-subpath-test-configmap-jqdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070911108s
STEP: Saw pod success
May 16 21:11:20.711: INFO: Pod "pod-subpath-test-configmap-jqdc" satisfied condition "success or failure"
May 16 21:11:20.714: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-jqdc container test-container-subpath-configmap-jqdc:
STEP: delete the pod
May 16 21:11:20.732: INFO: Waiting for pod pod-subpath-test-configmap-jqdc to disappear
May 16 21:11:20.766: INFO: Pod pod-subpath-test-configmap-jqdc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jqdc
May 16 21:11:20.766: INFO: Deleting pod "pod-subpath-test-configmap-jqdc" in namespace "subpath-8526"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 21:11:20.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8526" for this suite.
• [SLOW TEST:24.229 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":3,"skipped":32,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
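The key detail in this subpath test is that the volumeMount's mountPath points at a file that already exists in the image, and subPath selects a single entry of the ConfigMap volume to land on it; the kubelet's atomic writer has to keep that file consistent while the volume is updated underneath. A minimal sketch of just the mount, with assumed paths and names:

```go
// Sketch of the subPath-over-existing-file mount pattern under test.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mount := corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test/sub-file", // a file that already exists in the image (assumed path)
		SubPath:   "sub-file",       // a single entry inside the configmap volume (assumed key)
	}
	fmt.Printf("%s -> %s\n", mount.SubPath, mount.MountPath)
}
```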
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:11:25.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 21:11:25.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9892' May 16 21:11:28.187: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 21:11:28.187: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 16 21:11:28.217: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-lc84t] May 16 21:11:28.217: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-lc84t" in namespace "kubectl-9892" to be "running and ready" May 16 21:11:28.240: INFO: Pod "e2e-test-httpd-rc-lc84t": Phase="Pending", Reason="", readiness=false. Elapsed: 22.925239ms May 16 21:11:30.244: INFO: Pod "e2e-test-httpd-rc-lc84t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02639405s May 16 21:11:32.248: INFO: Pod "e2e-test-httpd-rc-lc84t": Phase="Running", Reason="", readiness=true. Elapsed: 4.031107008s May 16 21:11:32.248: INFO: Pod "e2e-test-httpd-rc-lc84t" satisfied condition "running and ready" May 16 21:11:32.248: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-lc84t] May 16 21:11:32.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9892' May 16 21:11:32.381: INFO: stderr: "" May 16 21:11:32.381: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.223. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.223. 
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 21:11:25.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 16 21:11:25.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9892'
May 16 21:11:28.187: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 16 21:11:28.187: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
May 16 21:11:28.217: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-lc84t]
May 16 21:11:28.217: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-lc84t" in namespace "kubectl-9892" to be "running and ready"
May 16 21:11:28.240: INFO: Pod "e2e-test-httpd-rc-lc84t": Phase="Pending", Reason="", readiness=false. Elapsed: 22.925239ms
May 16 21:11:30.244: INFO: Pod "e2e-test-httpd-rc-lc84t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02639405s
May 16 21:11:32.248: INFO: Pod "e2e-test-httpd-rc-lc84t": Phase="Running", Reason="", readiness=true. Elapsed: 4.031107008s
May 16 21:11:32.248: INFO: Pod "e2e-test-httpd-rc-lc84t" satisfied condition "running and ready"
May 16 21:11:32.248: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-lc84t]
May 16 21:11:32.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9892'
May 16 21:11:32.381: INFO: stderr: ""
May 16 21:11:32.381: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.223. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.223. Set the 'ServerName' directive globally to suppress this message\n[Sat May 16 21:11:31.065559 2020] [mpm_event:notice] [pid 1:tid 140407500458856] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat May 16 21:11:31.065628 2020] [core:notice] [pid 1:tid 140407500458856] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
May 16 21:11:32.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9892'
May 16 21:11:32.495: INFO: stderr: ""
May 16 21:11:32.495: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 21:11:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9892" for this suite.
• [SLOW TEST:7.208 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":5,"skipped":138,"failed":0}
SS
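As the stderr line records, `kubectl run --generator=run/v1` was already deprecated when this suite ran; it expanded to a bare ReplicationController along the lines of the sketch below (client-go types; the single replica and the `run` label follow the generator's conventions, but treat the details as assumptions):

```go
// Sketch of roughly the ReplicationController that
// `kubectl run --generator=run/v1` created in the test above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-httpd-rc"} // generator's label convention
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-rc", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RC selectors are plain label maps
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-rc",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	fmt.Println(rc.Name)
}
```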
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 21:11:32.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 16 21:11:32.584: INFO: Creating deployment "webserver-deployment"
May 16 21:11:32.598: INFO: Waiting for observed generation 1
May 16 21:11:34.608: INFO: Waiting for all required pods to come up
May 16 21:11:34.613: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 16 21:11:46.624: INFO: Waiting for deployment "webserver-deployment" to complete
May 16 21:11:46.630: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 16 21:11:46.636: INFO: Updating deployment webserver-deployment
May 16 21:11:46.636: INFO: Waiting for observed generation 2
May 16 21:11:48.656: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 16 21:11:48.659: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 16 21:11:48.661: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 16 21:11:48.667: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 16 21:11:48.667: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 16 21:11:48.669: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 16 21:11:48.672: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 16 21:11:48.672: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 16 21:11:48.676: INFO: Updating deployment webserver-deployment
May 16 21:11:48.676: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 16 21:11:49.660: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 16 21:11:52.225: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
May 16 21:11:53.312: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8581 /apis/apps/v1/namespaces/deployment-8581/deployments/webserver-deployment 81abbb13-86dc-4a6e-b9f7-9755f68f67e7 16728320 3 2020-05-16 21:11:32 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030599c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-16 21:11:48 +0000 UTC,LastTransitionTime:2020-05-16 21:11:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-16 21:11:49 +0000 UTC,LastTransitionTime:2020-05-16 21:11:32 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
May 16 21:11:53.328: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8581 /apis/apps/v1/namespaces/deployment-8581/replicasets/webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 16728307 3 2020-05-16 21:11:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment
webserver-deployment 81abbb13-86dc-4a6e-b9f7-9755f68f67e7 0xc003059e97 0xc003059e98}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003059f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 21:11:53.328: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 16 21:11:53.328: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8581 /apis/apps/v1/namespaces/deployment-8581/replicasets/webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 16728315 3 2020-05-16 21:11:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 81abbb13-86dc-4a6e-b9f7-9755f68f67e7 0xc003059dd7 0xc003059dd8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003059e38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 16 21:11:53.575: INFO: Pod "webserver-deployment-595b5b9587-2t9kg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2t9kg webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-2t9kg 8ccf02ed-d05e-421c-9e6f-9d1954161c30 16728347 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031de3b7 0xc0031de3b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.576: INFO: Pod "webserver-deployment-595b5b9587-4dz4x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4dz4x webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-4dz4x b9be63d9-1df8-4759-aa74-ba3c55d3d6a7 16728362 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031de517 0xc0031de518}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:11:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.576: INFO: Pod "webserver-deployment-595b5b9587-6kd7l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6kd7l webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-6kd7l 973b9bf5-6238-41a9-9c2c-4cfcbd8228fd 16728331 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031de677 0xc0031de678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.576: INFO: Pod "webserver-deployment-595b5b9587-7cvn7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7cvn7 webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-7cvn7 429faca9-5aab-488a-96c1-6a87e5ef5abd 16728097 0 2020-05-16 21:11:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031de7d7 0xc0031de7d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.224,StartTime:2020-05-16 21:11:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 21:11:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0df97c625c7625b7f69ee50dc29dc1c4c87361bc7ff85ddb5281167051be9f3e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.577: INFO: Pod "webserver-deployment-595b5b9587-7rrhw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7rrhw webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-7rrhw f97ca02d-b09f-4b9e-b982-37846b02c42b 16728140 0 2020-05-16 21:11:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031de957 0xc0031de958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.226,StartTime:2020-05-16 21:11:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 21:11:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://562747cfe3e762e16dceee8b389f039de573d2dbfaee35981aa85888e9b541f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 16 21:11:53.577: INFO: Pod "webserver-deployment-595b5b9587-bcqzd" is not available: Pending on jerma-worker2 (hostIP 172.17.0.8, no podIP yet); uid=49e80023-09b0-44b3-b7d2-5eb415b2ed50, resourceVersion=16728333, created 2020-05-16 21:11:49 +0000 UTC; conditions PodScheduled=True, Initialized=True, Ready=False, ContainersReady=False (ContainersNotReady: [httpd]); container httpd Waiting (ContainerCreating), restartCount=0. (All replicas of ReplicaSet webserver-deployment-595b5b9587 share a byte-identical PodSpec: a single docker.io/library/httpd:2.4.38-alpine container mounting the default-token-7smbh service-account volume, BestEffort QoS, default scheduler, and the standard not-ready/unreachable tolerations. The vjcmd entry below keeps its full dump; the other entries are summarized to their distinguishing metadata and status.)
May 16 21:11:53.577: INFO: Pod "webserver-deployment-595b5b9587-bstqm" is not available: Pending on jerma-worker (hostIP 172.17.0.10, no podIP yet); uid=ee908501-3dca-4335-8396-a847477efd83, resourceVersion=16728334, created 2020-05-16 21:11:49 +0000 UTC; Ready=False (ContainersNotReady: [httpd]); container httpd Waiting (ContainerCreating), restartCount=0.
May 16 21:11:53.578: INFO: Pod "webserver-deployment-595b5b9587-dxz5k" is available: Running on jerma-worker (hostIP 172.17.0.10, podIP 10.244.1.227); uid=42a41e52-be6d-4ab5-9c0c-4651b4583a19, resourceVersion=16728135, created 2020-05-16 21:11:32 +0000 UTC; Ready=True since 21:11:43; container httpd running since 21:11:41, restartCount=0, ContainerID containerd://1349fb9b2071f5b10193e58722681ea88884f8693728407c8e273298bc44f436.
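The "is available" / "is not available" verdicts in these lines follow from each pod's status: a replica counts as available once it is Running and its Ready condition is True (and has been for at least the deployment's minReadySeconds). Below is a minimal, self-contained sketch of that rule, written here for illustration against the k8s.io/api and k8s.io/apimachinery types; it approximates, rather than reproduces, the framework's actual helper.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readyCondition returns the pod's Ready condition, or nil if not yet reported.
func readyCondition(pod *corev1.Pod) *corev1.PodCondition {
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == corev1.PodReady {
			return &pod.Status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable approximates the availability rule: the pod must be Running,
// Ready=True, and ready for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := readyCondition(pod)
	if pod.Status.Phase != corev1.PodRunning || c == nil || c.Status != corev1.ConditionTrue {
		return false
	}
	readyFor := now.Time.Sub(c.LastTransitionTime.Time)
	return minReadySeconds == 0 || readyFor >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	// A Pending pod like bcqzd above: scheduled, but its container is still creating.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
}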
May 16 21:11:53.578: INFO: Pod "webserver-deployment-595b5b9587-fflk8" is available: Running on jerma-worker (hostIP 172.17.0.10, podIP 10.244.1.228); uid=bc28dd21-7256-47e7-a6bb-4dd9cc379250, resourceVersion=16728155, created 2020-05-16 21:11:32 +0000 UTC; Ready=True since 21:11:44; container httpd running since 21:11:43, restartCount=0, ContainerID containerd://18939c2c56b976d8fe82944dbd04db43e7678175d2356ef3b4e84151e2235ee5.
May 16 21:11:53.578: INFO: Pod "webserver-deployment-595b5b9587-hpbnk" is not available: Pending on jerma-worker2 (hostIP 172.17.0.8, no podIP yet); uid=fc5d373d-7dff-405c-9a67-ebca2c303faf, resourceVersion=16728313, created 2020-05-16 21:11:48 +0000 UTC; Ready=False (ContainersNotReady: [httpd]); container httpd Waiting (ContainerCreating), restartCount=0.
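These mixed verdicts are exactly what the deployment controller aggregates into the DeploymentStatus fields (Replicas, UpdatedReplicas, ReadyReplicas, AvailableReplicas, UnavailableReplicas) reported earlier in the run. A hedged sketch of reading that status with client-go follows; it assumes client-go v0.18+ method signatures, the kubeconfig path this run used, and a deployment name of webserver-deployment, which is inferred from the owning ReplicaSet webserver-deployment-595b5b9587.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path this test run used; adjust for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Deployment name inferred from the ReplicaSet name in the dumps above.
	d, err := cs.AppsV1().Deployments("deployment-8581").Get(context.TODO(), "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("replicas=%d updated=%d ready=%d available=%d unavailable=%d\n",
		d.Status.Replicas, d.Status.UpdatedReplicas, d.Status.ReadyReplicas,
		d.Status.AvailableReplicas, d.Status.UnavailableReplicas)
}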
May 16 21:11:53.578: INFO: Pod "webserver-deployment-595b5b9587-hzthd" is not available: Pending on jerma-worker (hostIP 172.17.0.10, no podIP yet); uid=949225d3-a37f-412b-976f-c89cc922a372, resourceVersion=16728365, created 2020-05-16 21:11:49 +0000 UTC; PodScheduled=True at 21:11:49, Initialized=True and Ready=False (ContainersNotReady: [httpd]) at 21:11:50; container httpd Waiting (ContainerCreating), restartCount=0.
May 16 21:11:53.578: INFO: Pod "webserver-deployment-595b5b9587-kp5cq" is available: Running on jerma-worker (hostIP 172.17.0.10, podIP 10.244.1.225); uid=1c58bc3e-c66c-4f92-9d82-1de62ee5fbc5, resourceVersion=16728118, created 2020-05-16 21:11:32 +0000 UTC; Ready=True since 21:11:41; container httpd running since 21:11:40, restartCount=0, ContainerID containerd://d2672563c9f40253c0e561d4b31f0d715df0c24b8d676501ca04c2cd2fc2210c.
May 16 21:11:53.579: INFO: Pod "webserver-deployment-595b5b9587-lsbn5" is available: Running on jerma-worker2 (hostIP 172.17.0.8, podIP 10.244.2.35); uid=69cb1f9b-a1f3-4f45-919e-49d74eedb6b5, resourceVersion=16728164, created 2020-05-16 21:11:32 +0000 UTC; Ready=True since 21:11:44; container httpd running since 21:11:44, restartCount=0, ContainerID containerd://42351baeeba313e9d783df6cb06614fc3f38bab0621e96d49da73174cb455515.
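Every dump in this block belongs to the same ReplicaSet, so the whole set can be fetched in one label-selector query on name=httpd,pod-template-hash=595b5b9587, which is how a per-pod readiness listing like this one can be reproduced by hand. A sketch, again assuming client-go v0.18+ and the same kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Select exactly the replicas dumped above via the ReplicaSet's hash label.
	pods, err := cs.CoreV1().Pods("deployment-8581").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=httpd,pod-template-hash=595b5b9587",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s\tphase=%s\tready=%v\tnode=%s\n", p.Name, p.Status.Phase, ready, p.Spec.NodeName)
	}
}

Run against the namespace above, this would print one line per replica, matching the available/not-available split in the log.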
May 16 21:11:53.579: INFO: Pod "webserver-deployment-595b5b9587-mbtvt" is available: Running on jerma-worker2 (hostIP 172.17.0.8, podIP 10.244.2.32); uid=8b57cc68-3cbc-4341-94dd-6cad47d56c83, resourceVersion=16728120, created 2020-05-16 21:11:32 +0000 UTC; Ready=True since 21:11:41; container httpd running since 21:11:41, restartCount=0, ContainerID containerd://f94775d988c8f0d22f0fb5af78645622cfb968a7ebb840f9f52a99b100da3fd5.
May 16 21:11:53.579: INFO: Pod "webserver-deployment-595b5b9587-mmhsg" is not available: Pending on jerma-worker (hostIP 172.17.0.10, no podIP yet); uid=f25c0455-6f13-46db-a4a4-94a95150c145, resourceVersion=16728324, created 2020-05-16 21:11:49 +0000 UTC; Ready=False (ContainersNotReady: [httpd]); container httpd Waiting (ContainerCreating), restartCount=0.
May 16 21:11:53.579: INFO: Pod "webserver-deployment-595b5b9587-nkssn" is not available: Pending on jerma-worker (hostIP 172.17.0.10, no podIP yet); uid=07b7771d-424c-4c63-a64b-434a139bbfbd, resourceVersion=16728318, created 2020-05-16 21:11:49 +0000 UTC; Ready=False (ContainersNotReady: [httpd]); container httpd Waiting (ContainerCreating), restartCount=0.
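The Pending replicas here are typically in ContainerCreating only briefly; a test (or operator) polls until the Ready condition flips rather than sampling once. Below is a sketch of such a wait loop using wait.PollImmediate from k8s.io/apimachinery; waitForPodReady is a hypothetical helper written for illustration, not the e2e framework's own wait utility.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the pod until its Ready condition is True or the
// timeout elapses; transient ContainerCreating states simply retry.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// e.g. one of the ContainerCreating replicas above
	if err := waitForPodReady(cs, "deployment-8581", "webserver-deployment-595b5b9587-mmhsg", 5*time.Minute); err != nil {
		fmt.Println("pod never became ready:", err)
	}
}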
May 16 21:11:53.580: INFO: Pod "webserver-deployment-595b5b9587-pmfmt" is not available: Pending on jerma-worker2 (hostIP 172.17.0.8, no podIP yet); uid=5fb7a9b8-5ace-47fa-9fe3-c953cd5def93, resourceVersion=16728323, created 2020-05-16 21:11:48 +0000 UTC; Ready=False (ContainersNotReady: [httpd]); container httpd Waiting (ContainerCreating), restartCount=0.
May 16 21:11:53.580: INFO: Pod "webserver-deployment-595b5b9587-s9xj2" is available: Running on jerma-worker2 (hostIP 172.17.0.8, podIP 10.244.2.33); uid=b8adab6d-5f31-46e0-8a82-e1ea3fb917f2, resourceVersion=16728142, created 2020-05-16 21:11:32 +0000 UTC; Ready=True since 21:11:43; container httpd running since 21:11:42, restartCount=0, ContainerID containerd://1a3482143def00b92e803b8909f1b36b09fda0c99b35c819e21b8ec0a3933281.
May 16 21:11:53.580: INFO: Pod "webserver-deployment-595b5b9587-vjcmd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vjcmd webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-vjcmd 6ca16ca6-4bd6-4467-8406-44a338b26c02 16728292 0 2020-05-16 21:11:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031dfd77 0xc0031dfd78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.580: INFO: Pod "webserver-deployment-595b5b9587-zf7lg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zf7lg webserver-deployment-595b5b9587- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-595b5b9587-zf7lg 6730caf5-59e6-4d71-993a-7b1feaa0dbc9 16728364 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c510c74a-e52c-4b6a-8da0-66a430ed884a 0xc0031dfed7 0xc0031dfed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.581: INFO: Pod "webserver-deployment-c7997dcc8-6hcc8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6hcc8 webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-6hcc8 f320dd21-4420-449d-ba3d-b07944ae8389 16728358 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262037 0xc003262038}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.581: INFO: Pod "webserver-deployment-c7997dcc8-8kj2g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8kj2g webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-8kj2g 1ec00721-24a2-48c0-b85c-00865bca2cb5 16728387 0 2020-05-16 21:11:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc0032621b7 0xc0032621b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.230,StartTime:2020-05-16 21:11:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.581: INFO: Pod "webserver-deployment-c7997dcc8-bxx9k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bxx9k webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-bxx9k 2a957edc-8a92-42f8-9608-797eb8f99d80 16728327 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262367 0xc003262368}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.581: INFO: Pod "webserver-deployment-c7997dcc8-f96bv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f96bv webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-f96bv afc4e867-aa2c-4178-8288-543289c6a295 16728369 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc0032624e7 0xc0032624e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.582: INFO: Pod "webserver-deployment-c7997dcc8-gxrqc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gxrqc webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-gxrqc 4ff431dd-b57c-46ec-a798-9fd994758f1e 16728308 0 2020-05-16 21:11:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262667 0xc003262668}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.582: INFO: Pod "webserver-deployment-c7997dcc8-hc9ng" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hc9ng webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-hc9ng cc7485fc-228a-4060-b5d7-5eb6eda786a9 16728380 0 2020-05-16 21:11:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc0032627e7 0xc0032627e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.38,StartTime:2020-05-16 21:11:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.582: INFO: Pod "webserver-deployment-c7997dcc8-n7jtt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n7jtt webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-n7jtt 7d94263e-8404-4632-8c59-0cd41598b3e8 16728388 0 2020-05-16 21:11:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262997 0xc003262998}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.37,StartTime:2020-05-16 21:11:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.583: INFO: Pod "webserver-deployment-c7997dcc8-nxm2g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nxm2g webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-nxm2g 5abeec10-b244-4d0a-8a1e-b2c2704f5371 16728329 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262b47 0xc003262b48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:11:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.584: INFO: Pod "webserver-deployment-c7997dcc8-p2rxg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p2rxg webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-p2rxg f68ce777-4d86-4ccd-b7d7-380c70b5693b 16728352 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262cc7 0xc003262cc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:11:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.584: INFO: Pod "webserver-deployment-c7997dcc8-rp47p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rp47p webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-rp47p c288ef28-79c1-4047-967d-a5524caafdbc 16728326 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262e47 0xc003262e48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.585: INFO: Pod "webserver-deployment-c7997dcc8-s65r4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s65r4 webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-s65r4 d6eee2e6-1df7-4086-a73c-52894fe4027f 16728374 0 2020-05-16 21:11:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003262fc7 0xc003262fc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:11:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.585: INFO: Pod "webserver-deployment-c7997dcc8-zdzhd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zdzhd webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-zdzhd d5fe24c7-e384-4a16-ae9d-9a5ff649e5b2 16728224 0 2020-05-16 21:11:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc003263147 0xc003263148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-16 21:11:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:11:53.586: INFO: Pod "webserver-deployment-c7997dcc8-zl4pn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zl4pn webserver-deployment-c7997dcc8- deployment-8581 /api/v1/namespaces/deployment-8581/pods/webserver-deployment-c7997dcc8-zl4pn e3ec64ef-f4d6-4bf3-a7a8-838629085b0d 16728383 0 2020-05-16 21:11:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 03dca9e5-917a-419b-9232-d7ded9fc0bbe 0xc0032632c7 0xc0032632c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7smbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7smbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7smbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:11:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.229,StartTime:2020-05-16 21:11:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:11:53.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8581" for this suite. • [SLOW TEST:22.774 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":6,"skipped":140,"failed":0} [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:11:55.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:11:56.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8247" for this suite. 
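For context on the "deployment should support proportional scaling" pass above: when a RollingUpdate Deployment is scaled while a rollout is still in flight, the controller distributes the added replicas across the old and new ReplicaSets in proportion to their current sizes, rather than giving everything to the new set. A minimal Go sketch of that distribution, with illustrative names and without the controller's maxSurge bookkeeping (this is not the deployment controller's own code):

package main

import "fmt"

// proportionalScale splits an increase of delta replicas across
// ReplicaSets in proportion to their current sizes (assumes the sets
// hold at least one replica in total); rounding leftovers are handed
// out one replica at a time.
func proportionalScale(sizes []int32, delta int32) []int32 {
	var total int32
	for _, s := range sizes {
		total += s
	}
	out := make([]int32, len(sizes))
	var given int32
	for i, s := range sizes {
		share := delta * s / total // floor of the proportional share
		out[i] = s + share
		given += share
	}
	for i := 0; given < delta; i = (i + 1) % len(out) {
		out[i]++ // distribute the rounding remainder
		given++
	}
	return out
}

func main() {
	// Old ReplicaSet at 8 replicas, new at 5; scale the Deployment up
	// by 7. Both sets grow roughly in proportion: prints [13 7].
	fmt.Println(proportionalScale([]int32{8, 5}, 7))
}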
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":7,"skipped":140,"failed":0} SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:11:56.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4938, will wait for the garbage collector to delete the pods May 16 21:12:09.360: INFO: Deleting Job.batch foo took: 35.474895ms May 16 21:12:09.660: INFO: Terminating Job.batch foo pods took: 300.25701ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:12:49.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4938" for this suite. • [SLOW TEST:53.057 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":8,"skipped":143,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:12:49.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-sf97 STEP: Creating a pod to test atomic-volume-subpath May 16 21:12:49.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-sf97" in namespace "subpath-1462" to be "success or failure" May 16 21:12:49.675: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229286ms May 16 21:12:51.743: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070077273s May 16 21:12:53.747: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.074241882s May 16 21:12:55.752: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 6.078860999s May 16 21:12:57.756: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 8.083018322s May 16 21:12:59.760: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 10.087076121s May 16 21:13:01.765: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 12.092090117s May 16 21:13:03.770: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 14.096565941s May 16 21:13:05.774: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 16.10086713s May 16 21:13:07.777: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 18.103706902s May 16 21:13:09.788: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 20.115112162s May 16 21:13:11.792: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Running", Reason="", readiness=true. Elapsed: 22.119169484s May 16 21:13:13.870: INFO: Pod "pod-subpath-test-downwardapi-sf97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.196754969s STEP: Saw pod success May 16 21:13:13.870: INFO: Pod "pod-subpath-test-downwardapi-sf97" satisfied condition "success or failure" May 16 21:13:13.874: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-sf97 container test-container-subpath-downwardapi-sf97: STEP: delete the pod May 16 21:13:14.043: INFO: Waiting for pod pod-subpath-test-downwardapi-sf97 to disappear May 16 21:13:14.062: INFO: Pod pod-subpath-test-downwardapi-sf97 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-sf97 May 16 21:13:14.062: INFO: Deleting pod "pod-subpath-test-downwardapi-sf97" in namespace "subpath-1462" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:13:14.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1462" for this suite. 
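The atomic-writer subpath case above boils down to a pod that mounts a single downward API file through a volume subPath and reads it back. A minimal sketch of such a pod spec built with the k8s.io/api types; the pod name, volume name, and busybox image are illustrative, not the test's actual fixture:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						// Expose the pod's own name as a file in the volume.
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podname"},
				// Mount one file out of the volume via subPath; the atomic
				// writer has to keep this path valid across updates.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podname",
					SubPath:   "podname",
				}},
			}},
		},
	}
	fmt.Println(pod.Name) // the object would normally be POSTed via client-go
}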
• [SLOW TEST:24.469 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":9,"skipped":145,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:13:14.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:13:14.529: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 16 21:13:19.534: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 21:13:19.534: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 16 21:13:19.571: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4083 /apis/apps/v1/namespaces/deployment-4083/deployments/test-cleanup-deployment c6e51a83-1e32-4e1e-bde4-48eddda008fe 16729008 1 2020-05-16 21:13:19 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000f071b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 16 21:13:19.590: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4083 /apis/apps/v1/namespaces/deployment-4083/replicasets/test-cleanup-deployment-55ffc6b7b6 93a2cddc-972b-464f-8ead-f433b8c0da2e 16729010 1 2020-05-16 21:13:19 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c6e51a83-1e32-4e1e-bde4-48eddda008fe 0xc00250dde7 0xc00250dde8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00250de58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 21:13:19.590: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 16 21:13:19.590: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4083 /apis/apps/v1/namespaces/deployment-4083/replicasets/test-cleanup-controller 0319b8f3-cccf-484f-9bc2-8e8491b2fb1b 16729009 1 2020-05-16 21:13:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c6e51a83-1e32-4e1e-bde4-48eddda008fe 0xc00250daef 0xc00250db00}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00250dd68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May
16 21:13:19.595: INFO: Pod "test-cleanup-controller-97rbr" is available: &Pod{ObjectMeta:{test-cleanup-controller-97rbr test-cleanup-controller- deployment-4083 /api/v1/namespaces/deployment-4083/pods/test-cleanup-controller-97rbr 4184d9fc-8d62-461c-b42f-d57d8219c277 16728994 0 2020-05-16 21:13:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0319b8f3-cccf-484f-9bc2-8e8491b2fb1b 0xc000642af7 0xc000642af8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-24vwm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-24vwm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-24vwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:13:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:13:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:13:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-16 21:13:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.52,StartTime:2020-05-16 21:13:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 21:13:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2c1ba0c2b14879b14fe739d77a4486199e16b33ef41bb7e5b9a199b912e3039b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 16 21:13:19.595: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-7jnwh" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-7jnwh test-cleanup-deployment-55ffc6b7b6- deployment-4083 /api/v1/namespaces/deployment-4083/pods/test-cleanup-deployment-55ffc6b7b6-7jnwh 0e4e1125-4452-4b8a-a70b-4632ba1c29ae 16729013 0 2020-05-16 21:13:19 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 93a2cddc-972b-464f-8ead-f433b8c0da2e 0xc000338e17 0xc000338e18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-24vwm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-24vwm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-24vwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tole
ration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:13:19.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4083" for this suite. • [SLOW TEST:5.733 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":10,"skipped":151,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:13:19.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 16 21:13:19.873: INFO: Waiting up to 5m0s for pod "pod-883244df-54a6-432a-bd3f-d03fd0a82b84" in namespace "emptydir-4804" to be "success or failure" May 16 21:13:19.895: INFO: Pod "pod-883244df-54a6-432a-bd3f-d03fd0a82b84": Phase="Pending", Reason="", readiness=false. Elapsed: 22.443105ms May 16 21:13:21.899: INFO: Pod "pod-883244df-54a6-432a-bd3f-d03fd0a82b84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026654518s May 16 21:13:23.948: INFO: Pod "pod-883244df-54a6-432a-bd3f-d03fd0a82b84": Phase="Running", Reason="", readiness=true. Elapsed: 4.075105269s May 16 21:13:25.951: INFO: Pod "pod-883244df-54a6-432a-bd3f-d03fd0a82b84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.078769393s STEP: Saw pod success May 16 21:13:25.951: INFO: Pod "pod-883244df-54a6-432a-bd3f-d03fd0a82b84" satisfied condition "success or failure" May 16 21:13:25.954: INFO: Trying to get logs from node jerma-worker2 pod pod-883244df-54a6-432a-bd3f-d03fd0a82b84 container test-container: STEP: delete the pod May 16 21:13:25.970: INFO: Waiting for pod pod-883244df-54a6-432a-bd3f-d03fd0a82b84 to disappear May 16 21:13:25.974: INFO: Pod pod-883244df-54a6-432a-bd3f-d03fd0a82b84 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:13:25.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4804" for this suite. • [SLOW TEST:6.176 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":159,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:13:25.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:13:26.100: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 26.164082ms)
May 16 21:13:26.104: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.079559ms)
May 16 21:13:26.107: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.164613ms)
May 16 21:13:26.110: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.887652ms)
May 16 21:13:26.113: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.892311ms)
May 16 21:13:26.116: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.81488ms)
May 16 21:13:26.119: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.351438ms)
May 16 21:13:26.122: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.238632ms)
May 16 21:13:26.126: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.91182ms)
May 16 21:13:26.130: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.314722ms)
May 16 21:13:26.133: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.490135ms)
May 16 21:13:26.137: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.043776ms)
May 16 21:13:26.141: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.275354ms)
May 16 21:13:26.145: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.129161ms)
May 16 21:13:26.149: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.049671ms)
May 16 21:13:26.152: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.396725ms)
May 16 21:13:26.156: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.543183ms)
May 16 21:13:26.160: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.413055ms)
May 16 21:13:26.166: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 6.551749ms)
May 16 21:13:26.187: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 21.088954ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:13:26.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5582" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":12,"skipped":162,"failed":0} SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:13:26.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:13:26.245: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3347 I0516 21:13:26.269325 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3347, replica count: 1 I0516 21:13:27.319766 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:13:28.319985 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:13:29.320189 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:13:30.320437 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 21:13:30.492: INFO: Created: latency-svc-5csds May 16 21:13:30.500: INFO: Got endpoints: latency-svc-5csds [79.381869ms] May 16 21:13:30.538: INFO: Created: latency-svc-t5cnr May 16 21:13:30.554: INFO: Got endpoints: latency-svc-t5cnr [54.603686ms] May 16 21:13:30.572: INFO: Created: latency-svc-t6fdz May 16 21:13:30.590: INFO: Got endpoints: latency-svc-t6fdz [90.312406ms] May 16 21:13:30.631: INFO: Created: latency-svc-sxfs4 May 16 21:13:30.641: INFO: Got endpoints: latency-svc-sxfs4 [141.451813ms] May 16 21:13:30.670: INFO: Created: latency-svc-llg75 May 16 21:13:30.695: INFO: Got endpoints: latency-svc-llg75 [195.685303ms] May 16 21:13:30.717: INFO: Created: latency-svc-kkpd8 May 16 21:13:30.798: INFO: Got endpoints: latency-svc-kkpd8 [298.314915ms] May 16 21:13:30.802: INFO: Created: latency-svc-kmgm8 May 16 21:13:30.809: INFO: Got endpoints: latency-svc-kmgm8 [309.433823ms] May 16 21:13:30.837: INFO: Created: latency-svc-wkl86 May 16 21:13:30.856: INFO: Got endpoints: latency-svc-wkl86 [355.619786ms] May 16 21:13:30.892: INFO: Created: latency-svc-t8hx5 May 16 21:13:30.923: INFO: Got endpoints: latency-svc-t8hx5 [423.474975ms] May 16 21:13:31.016: INFO: Created: latency-svc-hbp99 May 16 21:13:31.079: INFO: Got endpoints: latency-svc-hbp99 [579.098927ms] May 16 21:13:31.083: INFO: Created: latency-svc-dq4k8 May 16 21:13:31.090: INFO: Got endpoints: 
latency-svc-dq4k8 [590.246747ms] May 16 21:13:31.114: INFO: Created: latency-svc-89zr8 May 16 21:13:31.126: INFO: Got endpoints: latency-svc-89zr8 [625.953424ms] May 16 21:13:31.150: INFO: Created: latency-svc-cj7tj May 16 21:13:31.164: INFO: Got endpoints: latency-svc-cj7tj [664.4991ms] May 16 21:13:31.223: INFO: Created: latency-svc-52cgx May 16 21:13:31.245: INFO: Got endpoints: latency-svc-52cgx [744.829758ms] May 16 21:13:31.282: INFO: Created: latency-svc-stbd7 May 16 21:13:31.303: INFO: Got endpoints: latency-svc-stbd7 [803.121573ms] May 16 21:13:31.379: INFO: Created: latency-svc-6k9cm May 16 21:13:31.386: INFO: Got endpoints: latency-svc-6k9cm [886.367043ms] May 16 21:13:31.416: INFO: Created: latency-svc-l7xjr May 16 21:13:31.427: INFO: Got endpoints: latency-svc-l7xjr [872.693118ms] May 16 21:13:31.455: INFO: Created: latency-svc-h8k6d May 16 21:13:31.469: INFO: Got endpoints: latency-svc-h8k6d [879.358231ms] May 16 21:13:31.522: INFO: Created: latency-svc-wkgh4 May 16 21:13:31.526: INFO: Got endpoints: latency-svc-wkgh4 [884.457843ms] May 16 21:13:31.551: INFO: Created: latency-svc-7k67j May 16 21:13:31.566: INFO: Got endpoints: latency-svc-7k67j [870.12627ms] May 16 21:13:31.594: INFO: Created: latency-svc-8khk9 May 16 21:13:31.666: INFO: Got endpoints: latency-svc-8khk9 [868.117281ms] May 16 21:13:31.691: INFO: Created: latency-svc-2hl4x May 16 21:13:31.724: INFO: Got endpoints: latency-svc-2hl4x [914.729007ms] May 16 21:13:31.822: INFO: Created: latency-svc-9drrj May 16 21:13:31.850: INFO: Got endpoints: latency-svc-9drrj [994.542009ms] May 16 21:13:31.892: INFO: Created: latency-svc-5skdr May 16 21:13:31.936: INFO: Got endpoints: latency-svc-5skdr [1.012371215s] May 16 21:13:31.996: INFO: Created: latency-svc-r8zht May 16 21:13:32.025: INFO: Got endpoints: latency-svc-r8zht [945.812552ms] May 16 21:13:32.088: INFO: Created: latency-svc-pjkd2 May 16 21:13:32.108: INFO: Got endpoints: latency-svc-pjkd2 [1.017616576s] May 16 21:13:32.139: INFO: Created: latency-svc-q5ms4 May 16 21:13:32.168: INFO: Got endpoints: latency-svc-q5ms4 [1.042169358s] May 16 21:13:32.230: INFO: Created: latency-svc-4bnfm May 16 21:13:32.233: INFO: Got endpoints: latency-svc-4bnfm [1.069017699s] May 16 21:13:32.271: INFO: Created: latency-svc-wzl5b May 16 21:13:32.281: INFO: Got endpoints: latency-svc-wzl5b [1.036751748s] May 16 21:13:32.397: INFO: Created: latency-svc-vxvxc May 16 21:13:32.451: INFO: Created: latency-svc-5k9hg May 16 21:13:32.451: INFO: Got endpoints: latency-svc-vxvxc [1.148288706s] May 16 21:13:32.468: INFO: Got endpoints: latency-svc-5k9hg [1.081707916s] May 16 21:13:32.493: INFO: Created: latency-svc-kmr9t May 16 21:13:32.541: INFO: Got endpoints: latency-svc-kmr9t [1.113438115s] May 16 21:13:32.565: INFO: Created: latency-svc-9nc7k May 16 21:13:32.578: INFO: Got endpoints: latency-svc-9nc7k [1.108170208s] May 16 21:13:32.606: INFO: Created: latency-svc-b7kmd May 16 21:13:32.638: INFO: Got endpoints: latency-svc-b7kmd [1.112028418s] May 16 21:13:32.727: INFO: Created: latency-svc-nnv48 May 16 21:13:32.746: INFO: Got endpoints: latency-svc-nnv48 [1.180755745s] May 16 21:13:32.775: INFO: Created: latency-svc-znb65 May 16 21:13:32.790: INFO: Got endpoints: latency-svc-znb65 [1.123730386s] May 16 21:13:32.858: INFO: Created: latency-svc-2gggk May 16 21:13:32.865: INFO: Got endpoints: latency-svc-2gggk [1.141171746s] May 16 21:13:33.032: INFO: Created: latency-svc-hvvst May 16 21:13:33.051: INFO: Got endpoints: latency-svc-hvvst [1.200563424s] May 16 21:13:33.082: INFO: Created: 
latency-svc-lln47 May 16 21:13:33.110: INFO: Got endpoints: latency-svc-lln47 [1.174282546s] May 16 21:13:33.175: INFO: Created: latency-svc-nrbr8 May 16 21:13:33.185: INFO: Got endpoints: latency-svc-nrbr8 [1.159712979s] May 16 21:13:33.213: INFO: Created: latency-svc-9gfx4 May 16 21:13:33.228: INFO: Got endpoints: latency-svc-9gfx4 [1.119893069s] May 16 21:13:33.273: INFO: Created: latency-svc-jgctr May 16 21:13:33.312: INFO: Got endpoints: latency-svc-jgctr [1.144171202s] May 16 21:13:33.368: INFO: Created: latency-svc-995zt May 16 21:13:33.382: INFO: Got endpoints: latency-svc-995zt [1.148748033s] May 16 21:13:33.410: INFO: Created: latency-svc-7xwqn May 16 21:13:33.468: INFO: Got endpoints: latency-svc-7xwqn [1.186615794s] May 16 21:13:33.494: INFO: Created: latency-svc-xr85h May 16 21:13:33.508: INFO: Got endpoints: latency-svc-xr85h [1.056749544s] May 16 21:13:33.537: INFO: Created: latency-svc-chgzm May 16 21:13:33.551: INFO: Got endpoints: latency-svc-chgzm [1.082735888s] May 16 21:13:33.644: INFO: Created: latency-svc-s8mms May 16 21:13:33.653: INFO: Got endpoints: latency-svc-s8mms [1.1119882s] May 16 21:13:33.681: INFO: Created: latency-svc-mdjll May 16 21:13:33.695: INFO: Got endpoints: latency-svc-mdjll [1.117221679s] May 16 21:13:33.805: INFO: Created: latency-svc-xkzp9 May 16 21:13:33.827: INFO: Got endpoints: latency-svc-xkzp9 [1.18933231s] May 16 21:13:33.890: INFO: Created: latency-svc-4n4xb May 16 21:13:33.941: INFO: Got endpoints: latency-svc-4n4xb [1.194839877s] May 16 21:13:34.005: INFO: Created: latency-svc-8qnn4 May 16 21:13:34.019: INFO: Got endpoints: latency-svc-8qnn4 [1.229326846s] May 16 21:13:34.092: INFO: Created: latency-svc-b8dz8 May 16 21:13:34.095: INFO: Got endpoints: latency-svc-b8dz8 [1.22957457s] May 16 21:13:34.118: INFO: Created: latency-svc-cw7pw May 16 21:13:34.134: INFO: Got endpoints: latency-svc-cw7pw [1.08300035s] May 16 21:13:34.160: INFO: Created: latency-svc-jm6xp May 16 21:13:34.185: INFO: Got endpoints: latency-svc-jm6xp [1.074399087s] May 16 21:13:34.247: INFO: Created: latency-svc-qsqc5 May 16 21:13:34.260: INFO: Got endpoints: latency-svc-qsqc5 [1.075518409s] May 16 21:13:34.279: INFO: Created: latency-svc-kcd7n May 16 21:13:34.310: INFO: Got endpoints: latency-svc-kcd7n [1.082054084s] May 16 21:13:34.397: INFO: Created: latency-svc-9twnv May 16 21:13:34.405: INFO: Got endpoints: latency-svc-9twnv [1.092390569s] May 16 21:13:34.431: INFO: Created: latency-svc-wkck4 May 16 21:13:34.441: INFO: Got endpoints: latency-svc-wkck4 [1.058825024s] May 16 21:13:34.471: INFO: Created: latency-svc-26rmr May 16 21:13:34.489: INFO: Got endpoints: latency-svc-26rmr [1.021099214s] May 16 21:13:34.562: INFO: Created: latency-svc-qbbqw May 16 21:13:34.579: INFO: Got endpoints: latency-svc-qbbqw [1.071218042s] May 16 21:13:34.598: INFO: Created: latency-svc-w4nk4 May 16 21:13:34.617: INFO: Got endpoints: latency-svc-w4nk4 [1.065722902s] May 16 21:13:34.684: INFO: Created: latency-svc-kwtdl May 16 21:13:34.687: INFO: Got endpoints: latency-svc-kwtdl [1.034460294s] May 16 21:13:34.735: INFO: Created: latency-svc-gl78q May 16 21:13:34.748: INFO: Got endpoints: latency-svc-gl78q [1.053099157s] May 16 21:13:34.834: INFO: Created: latency-svc-bs2rx May 16 21:13:34.840: INFO: Got endpoints: latency-svc-bs2rx [1.012461481s] May 16 21:13:34.899: INFO: Created: latency-svc-lvfkn May 16 21:13:34.929: INFO: Got endpoints: latency-svc-lvfkn [987.861646ms] May 16 21:13:34.991: INFO: Created: latency-svc-sgbkq May 16 21:13:34.996: INFO: Got endpoints: 
latency-svc-sgbkq [976.674981ms] May 16 21:13:35.042: INFO: Created: latency-svc-qj88g May 16 21:13:35.056: INFO: Got endpoints: latency-svc-qj88g [960.739785ms] May 16 21:13:35.085: INFO: Created: latency-svc-ng8rk May 16 21:13:35.121: INFO: Got endpoints: latency-svc-ng8rk [986.814251ms] May 16 21:13:35.191: INFO: Created: latency-svc-s9hl6 May 16 21:13:35.206: INFO: Got endpoints: latency-svc-s9hl6 [1.020627116s] May 16 21:13:35.297: INFO: Created: latency-svc-5gd6b May 16 21:13:35.300: INFO: Got endpoints: latency-svc-5gd6b [1.039147039s] May 16 21:13:35.344: INFO: Created: latency-svc-frrk2 May 16 21:13:35.363: INFO: Got endpoints: latency-svc-frrk2 [1.053470163s] May 16 21:13:35.390: INFO: Created: latency-svc-dbxff May 16 21:13:35.438: INFO: Got endpoints: latency-svc-dbxff [1.033583194s] May 16 21:13:35.440: INFO: Created: latency-svc-9vwpv May 16 21:13:35.453: INFO: Got endpoints: latency-svc-9vwpv [1.011986567s] May 16 21:13:35.479: INFO: Created: latency-svc-lf6zb May 16 21:13:35.492: INFO: Got endpoints: latency-svc-lf6zb [1.00243937s] May 16 21:13:35.516: INFO: Created: latency-svc-lqr5w May 16 21:13:35.533: INFO: Got endpoints: latency-svc-lqr5w [953.552033ms] May 16 21:13:35.583: INFO: Created: latency-svc-lpjdz May 16 21:13:35.592: INFO: Got endpoints: latency-svc-lpjdz [975.129022ms] May 16 21:13:35.613: INFO: Created: latency-svc-847jj May 16 21:13:35.629: INFO: Got endpoints: latency-svc-847jj [941.461437ms] May 16 21:13:35.647: INFO: Created: latency-svc-bq2k7 May 16 21:13:35.665: INFO: Got endpoints: latency-svc-bq2k7 [917.267781ms] May 16 21:13:35.732: INFO: Created: latency-svc-lk659 May 16 21:13:35.734: INFO: Got endpoints: latency-svc-lk659 [894.677837ms] May 16 21:13:35.810: INFO: Created: latency-svc-pwvq4 May 16 21:13:35.827: INFO: Got endpoints: latency-svc-pwvq4 [898.252222ms] May 16 21:13:35.894: INFO: Created: latency-svc-kt7pc May 16 21:13:35.900: INFO: Got endpoints: latency-svc-kt7pc [903.479771ms] May 16 21:13:35.947: INFO: Created: latency-svc-c5dnd May 16 21:13:35.960: INFO: Got endpoints: latency-svc-c5dnd [903.940094ms] May 16 21:13:36.044: INFO: Created: latency-svc-8dtlb May 16 21:13:36.056: INFO: Got endpoints: latency-svc-8dtlb [934.76819ms] May 16 21:13:36.079: INFO: Created: latency-svc-6dzdt May 16 21:13:36.092: INFO: Got endpoints: latency-svc-6dzdt [886.575603ms] May 16 21:13:36.115: INFO: Created: latency-svc-v2vh8 May 16 21:13:36.128: INFO: Got endpoints: latency-svc-v2vh8 [828.517234ms] May 16 21:13:36.184: INFO: Created: latency-svc-mpr9z May 16 21:13:36.188: INFO: Got endpoints: latency-svc-mpr9z [824.849921ms] May 16 21:13:36.218: INFO: Created: latency-svc-x774q May 16 21:13:36.231: INFO: Got endpoints: latency-svc-x774q [792.163618ms] May 16 21:13:36.248: INFO: Created: latency-svc-l8s2w May 16 21:13:36.267: INFO: Got endpoints: latency-svc-l8s2w [813.896981ms] May 16 21:13:36.326: INFO: Created: latency-svc-hsc99 May 16 21:13:36.328: INFO: Got endpoints: latency-svc-hsc99 [836.065647ms] May 16 21:13:36.373: INFO: Created: latency-svc-48q9s May 16 21:13:36.387: INFO: Got endpoints: latency-svc-48q9s [854.353854ms] May 16 21:13:36.415: INFO: Created: latency-svc-4dwkb May 16 21:13:36.493: INFO: Got endpoints: latency-svc-4dwkb [900.463934ms] May 16 21:13:36.518: INFO: Created: latency-svc-685vh May 16 21:13:36.532: INFO: Got endpoints: latency-svc-685vh [902.965831ms] May 16 21:13:36.561: INFO: Created: latency-svc-sxg8d May 16 21:13:36.580: INFO: Got endpoints: latency-svc-sxg8d [914.676205ms] May 16 21:13:36.631: INFO: Created: 
latency-svc-xkbl8 May 16 21:13:36.641: INFO: Got endpoints: latency-svc-xkbl8 [906.564266ms] May 16 21:13:36.667: INFO: Created: latency-svc-pxvft May 16 21:13:36.677: INFO: Got endpoints: latency-svc-pxvft [849.542854ms] May 16 21:13:36.697: INFO: Created: latency-svc-z8wlh May 16 21:13:36.707: INFO: Got endpoints: latency-svc-z8wlh [807.448872ms] May 16 21:13:36.727: INFO: Created: latency-svc-p9t5w May 16 21:13:36.780: INFO: Got endpoints: latency-svc-p9t5w [820.138035ms] May 16 21:13:36.788: INFO: Created: latency-svc-kflkl May 16 21:13:36.804: INFO: Got endpoints: latency-svc-kflkl [748.276492ms] May 16 21:13:36.824: INFO: Created: latency-svc-qbmct May 16 21:13:36.840: INFO: Got endpoints: latency-svc-qbmct [747.658362ms] May 16 21:13:36.865: INFO: Created: latency-svc-q4s9q May 16 21:13:36.876: INFO: Got endpoints: latency-svc-q4s9q [747.827567ms] May 16 21:13:36.942: INFO: Created: latency-svc-rbbxx May 16 21:13:36.948: INFO: Got endpoints: latency-svc-rbbxx [759.662503ms] May 16 21:13:37.010: INFO: Created: latency-svc-nhlqh May 16 21:13:37.032: INFO: Got endpoints: latency-svc-nhlqh [801.687426ms] May 16 21:13:37.098: INFO: Created: latency-svc-lnm58 May 16 21:13:37.129: INFO: Created: latency-svc-6889t May 16 21:13:37.129: INFO: Got endpoints: latency-svc-lnm58 [862.118678ms] May 16 21:13:37.178: INFO: Got endpoints: latency-svc-6889t [850.132664ms] May 16 21:13:37.247: INFO: Created: latency-svc-dcnks May 16 21:13:37.250: INFO: Got endpoints: latency-svc-dcnks [862.563538ms] May 16 21:13:37.298: INFO: Created: latency-svc-2zwxz May 16 21:13:37.315: INFO: Got endpoints: latency-svc-2zwxz [822.365461ms] May 16 21:13:37.384: INFO: Created: latency-svc-pfp59 May 16 21:13:37.389: INFO: Got endpoints: latency-svc-pfp59 [857.224316ms] May 16 21:13:37.436: INFO: Created: latency-svc-lbrks May 16 21:13:37.448: INFO: Got endpoints: latency-svc-lbrks [867.328504ms] May 16 21:13:37.466: INFO: Created: latency-svc-msxb4 May 16 21:13:37.576: INFO: Got endpoints: latency-svc-msxb4 [935.133249ms] May 16 21:13:37.592: INFO: Created: latency-svc-67fvm May 16 21:13:37.626: INFO: Created: latency-svc-9swpk May 16 21:13:37.626: INFO: Got endpoints: latency-svc-67fvm [949.255642ms] May 16 21:13:37.640: INFO: Got endpoints: latency-svc-9swpk [932.544881ms] May 16 21:13:37.659: INFO: Created: latency-svc-kj4x6 May 16 21:13:37.670: INFO: Got endpoints: latency-svc-kj4x6 [890.069007ms] May 16 21:13:37.738: INFO: Created: latency-svc-vfw4w May 16 21:13:37.741: INFO: Got endpoints: latency-svc-vfw4w [937.130344ms] May 16 21:13:37.788: INFO: Created: latency-svc-rh29g May 16 21:13:37.803: INFO: Got endpoints: latency-svc-rh29g [962.887698ms] May 16 21:13:37.825: INFO: Created: latency-svc-xhttz May 16 21:13:37.887: INFO: Got endpoints: latency-svc-xhttz [1.011397004s] May 16 21:13:37.890: INFO: Created: latency-svc-9pg6g May 16 21:13:37.899: INFO: Got endpoints: latency-svc-9pg6g [951.334406ms] May 16 21:13:37.922: INFO: Created: latency-svc-kx7kw May 16 21:13:37.935: INFO: Got endpoints: latency-svc-kx7kw [902.94066ms] May 16 21:13:37.959: INFO: Created: latency-svc-xjx5p May 16 21:13:37.972: INFO: Got endpoints: latency-svc-xjx5p [842.199825ms] May 16 21:13:38.038: INFO: Created: latency-svc-9xvqn May 16 21:13:38.044: INFO: Got endpoints: latency-svc-9xvqn [865.690701ms] May 16 21:13:38.126: INFO: Created: latency-svc-l8fkq May 16 21:13:38.164: INFO: Got endpoints: latency-svc-l8fkq [913.409376ms] May 16 21:13:38.192: INFO: Created: latency-svc-rmgtp May 16 21:13:38.210: INFO: Got endpoints: 
latency-svc-rmgtp [894.768717ms] May 16 21:13:38.238: INFO: Created: latency-svc-4p8c7 May 16 21:13:38.255: INFO: Got endpoints: latency-svc-4p8c7 [865.933895ms] May 16 21:13:38.295: INFO: Created: latency-svc-27q4q May 16 21:13:38.298: INFO: Got endpoints: latency-svc-27q4q [850.014767ms] May 16 21:13:38.366: INFO: Created: latency-svc-2xvcx May 16 21:13:38.375: INFO: Got endpoints: latency-svc-2xvcx [799.070515ms] May 16 21:13:38.462: INFO: Created: latency-svc-t5xhw May 16 21:13:38.466: INFO: Got endpoints: latency-svc-t5xhw [839.764747ms] May 16 21:13:38.544: INFO: Created: latency-svc-g46fl May 16 21:13:38.561: INFO: Got endpoints: latency-svc-g46fl [921.646013ms] May 16 21:13:38.607: INFO: Created: latency-svc-pr4pr May 16 21:13:38.609: INFO: Got endpoints: latency-svc-pr4pr [939.430629ms] May 16 21:13:38.663: INFO: Created: latency-svc-x6jz2 May 16 21:13:38.676: INFO: Got endpoints: latency-svc-x6jz2 [934.819835ms] May 16 21:13:38.700: INFO: Created: latency-svc-4qgr2 May 16 21:13:38.756: INFO: Got endpoints: latency-svc-4qgr2 [953.044668ms] May 16 21:13:38.760: INFO: Created: latency-svc-kpv9f May 16 21:13:38.778: INFO: Got endpoints: latency-svc-kpv9f [890.922094ms] May 16 21:13:38.797: INFO: Created: latency-svc-rnz22 May 16 21:13:38.815: INFO: Got endpoints: latency-svc-rnz22 [915.427459ms] May 16 21:13:38.833: INFO: Created: latency-svc-gjgzn May 16 21:13:38.846: INFO: Got endpoints: latency-svc-gjgzn [910.314114ms] May 16 21:13:38.888: INFO: Created: latency-svc-97mzz May 16 21:13:38.899: INFO: Got endpoints: latency-svc-97mzz [927.472433ms] May 16 21:13:38.940: INFO: Created: latency-svc-74n2k May 16 21:13:38.960: INFO: Got endpoints: latency-svc-74n2k [915.82701ms] May 16 21:13:39.050: INFO: Created: latency-svc-p78k2 May 16 21:13:39.054: INFO: Got endpoints: latency-svc-p78k2 [890.692284ms] May 16 21:13:39.103: INFO: Created: latency-svc-29jbc May 16 21:13:39.116: INFO: Got endpoints: latency-svc-29jbc [906.08837ms] May 16 21:13:39.150: INFO: Created: latency-svc-p6d4p May 16 21:13:39.241: INFO: Got endpoints: latency-svc-p6d4p [985.993567ms] May 16 21:13:39.244: INFO: Created: latency-svc-xdlwt May 16 21:13:39.254: INFO: Got endpoints: latency-svc-xdlwt [956.49943ms] May 16 21:13:39.290: INFO: Created: latency-svc-vvflx May 16 21:13:39.302: INFO: Got endpoints: latency-svc-vvflx [926.911849ms] May 16 21:13:39.331: INFO: Created: latency-svc-7z4gc May 16 21:13:39.397: INFO: Got endpoints: latency-svc-7z4gc [930.572261ms] May 16 21:13:39.438: INFO: Created: latency-svc-x2dlk May 16 21:13:39.454: INFO: Got endpoints: latency-svc-x2dlk [892.651596ms] May 16 21:13:39.475: INFO: Created: latency-svc-nmvc7 May 16 21:13:39.483: INFO: Got endpoints: latency-svc-nmvc7 [873.502784ms] May 16 21:13:39.535: INFO: Created: latency-svc-v5v9h May 16 21:13:39.559: INFO: Got endpoints: latency-svc-v5v9h [883.313082ms] May 16 21:13:39.595: INFO: Created: latency-svc-znnd5 May 16 21:13:39.610: INFO: Got endpoints: latency-svc-znnd5 [853.736511ms] May 16 21:13:39.631: INFO: Created: latency-svc-bkht4 May 16 21:13:39.684: INFO: Got endpoints: latency-svc-bkht4 [905.403986ms] May 16 21:13:39.709: INFO: Created: latency-svc-xbtnp May 16 21:13:39.718: INFO: Got endpoints: latency-svc-xbtnp [903.352136ms] May 16 21:13:39.751: INFO: Created: latency-svc-2gn2c May 16 21:13:39.767: INFO: Got endpoints: latency-svc-2gn2c [920.769161ms] May 16 21:13:39.823: INFO: Created: latency-svc-hsgsw May 16 21:13:39.903: INFO: Got endpoints: latency-svc-hsgsw [1.003417021s] May 16 21:13:39.965: INFO: Created: 
latency-svc-vbvv2 May 16 21:13:39.969: INFO: Got endpoints: latency-svc-vbvv2 [1.009451348s] May 16 21:13:40.002: INFO: Created: latency-svc-658j7 May 16 21:13:40.050: INFO: Got endpoints: latency-svc-658j7 [996.175121ms] May 16 21:13:40.127: INFO: Created: latency-svc-6zq25 May 16 21:13:40.130: INFO: Got endpoints: latency-svc-6zq25 [1.01419933s] May 16 21:13:40.161: INFO: Created: latency-svc-7h297 May 16 21:13:40.176: INFO: Got endpoints: latency-svc-7h297 [935.324152ms] May 16 21:13:40.201: INFO: Created: latency-svc-kbhgd May 16 21:13:40.213: INFO: Got endpoints: latency-svc-kbhgd [958.897975ms] May 16 21:13:40.271: INFO: Created: latency-svc-tsf95 May 16 21:13:40.297: INFO: Created: latency-svc-4gnlm May 16 21:13:40.298: INFO: Got endpoints: latency-svc-tsf95 [995.207723ms] May 16 21:13:40.310: INFO: Got endpoints: latency-svc-4gnlm [912.710563ms] May 16 21:13:40.334: INFO: Created: latency-svc-c67z9 May 16 21:13:40.346: INFO: Got endpoints: latency-svc-c67z9 [891.690984ms] May 16 21:13:40.421: INFO: Created: latency-svc-zk7b6 May 16 21:13:40.447: INFO: Created: latency-svc-j72tl May 16 21:13:40.447: INFO: Got endpoints: latency-svc-zk7b6 [964.498079ms] May 16 21:13:40.471: INFO: Got endpoints: latency-svc-j72tl [911.729647ms] May 16 21:13:40.502: INFO: Created: latency-svc-wz7xk May 16 21:13:40.514: INFO: Got endpoints: latency-svc-wz7xk [904.645499ms] May 16 21:13:40.560: INFO: Created: latency-svc-wgsxk May 16 21:13:40.572: INFO: Got endpoints: latency-svc-wgsxk [887.798219ms] May 16 21:13:40.602: INFO: Created: latency-svc-c9k5l May 16 21:13:40.617: INFO: Got endpoints: latency-svc-c9k5l [898.543479ms] May 16 21:13:40.638: INFO: Created: latency-svc-jmsv2 May 16 21:13:40.653: INFO: Got endpoints: latency-svc-jmsv2 [886.507999ms] May 16 21:13:40.702: INFO: Created: latency-svc-4kgnp May 16 21:13:40.730: INFO: Got endpoints: latency-svc-4kgnp [827.114451ms] May 16 21:13:40.730: INFO: Created: latency-svc-bqs6d May 16 21:13:40.743: INFO: Got endpoints: latency-svc-bqs6d [774.146722ms] May 16 21:13:40.777: INFO: Created: latency-svc-ck98j May 16 21:13:40.870: INFO: Got endpoints: latency-svc-ck98j [819.373289ms] May 16 21:13:40.872: INFO: Created: latency-svc-6v7wc May 16 21:13:40.882: INFO: Got endpoints: latency-svc-6v7wc [751.902346ms] May 16 21:13:40.904: INFO: Created: latency-svc-264sr May 16 21:13:40.925: INFO: Got endpoints: latency-svc-264sr [748.577991ms] May 16 21:13:40.945: INFO: Created: latency-svc-qj748 May 16 21:13:40.948: INFO: Got endpoints: latency-svc-qj748 [734.950557ms] May 16 21:13:41.026: INFO: Created: latency-svc-btp8h May 16 21:13:41.064: INFO: Got endpoints: latency-svc-btp8h [766.167837ms] May 16 21:13:41.064: INFO: Created: latency-svc-5xj44 May 16 21:13:41.081: INFO: Got endpoints: latency-svc-5xj44 [771.536918ms] May 16 21:13:41.100: INFO: Created: latency-svc-6p85z May 16 21:13:41.151: INFO: Got endpoints: latency-svc-6p85z [805.133517ms] May 16 21:13:41.161: INFO: Created: latency-svc-7kr9t May 16 21:13:41.178: INFO: Got endpoints: latency-svc-7kr9t [730.090797ms] May 16 21:13:41.209: INFO: Created: latency-svc-xncfw May 16 21:13:41.225: INFO: Got endpoints: latency-svc-xncfw [754.397467ms] May 16 21:13:41.355: INFO: Created: latency-svc-v9d72 May 16 21:13:41.357: INFO: Got endpoints: latency-svc-v9d72 [842.984482ms] May 16 21:13:41.383: INFO: Created: latency-svc-v6tc6 May 16 21:13:41.400: INFO: Got endpoints: latency-svc-v6tc6 [828.073567ms] May 16 21:13:41.443: INFO: Created: latency-svc-64w4l May 16 21:13:41.486: INFO: Got endpoints: 
latency-svc-64w4l [869.434481ms] May 16 21:13:41.514: INFO: Created: latency-svc-bxmp5 May 16 21:13:41.526: INFO: Got endpoints: latency-svc-bxmp5 [873.341891ms] May 16 21:13:41.550: INFO: Created: latency-svc-5qcxm May 16 21:13:41.563: INFO: Got endpoints: latency-svc-5qcxm [833.274041ms] May 16 21:13:41.579: INFO: Created: latency-svc-xwkj7 May 16 21:13:41.631: INFO: Got endpoints: latency-svc-xwkj7 [887.167015ms] May 16 21:13:41.633: INFO: Created: latency-svc-n6v6h May 16 21:13:41.641: INFO: Got endpoints: latency-svc-n6v6h [771.394148ms] May 16 21:13:41.688: INFO: Created: latency-svc-2564x May 16 21:13:41.703: INFO: Got endpoints: latency-svc-2564x [820.883692ms] May 16 21:13:41.724: INFO: Created: latency-svc-c784s May 16 21:13:41.786: INFO: Got endpoints: latency-svc-c784s [860.760508ms] May 16 21:13:41.788: INFO: Created: latency-svc-bzpx4 May 16 21:13:41.797: INFO: Got endpoints: latency-svc-bzpx4 [849.275084ms] May 16 21:13:41.822: INFO: Created: latency-svc-g6zfg May 16 21:13:41.834: INFO: Got endpoints: latency-svc-g6zfg [770.03869ms] May 16 21:13:41.858: INFO: Created: latency-svc-prs7z May 16 21:13:41.942: INFO: Got endpoints: latency-svc-prs7z [861.111435ms] May 16 21:13:41.944: INFO: Created: latency-svc-slq6b May 16 21:13:41.955: INFO: Got endpoints: latency-svc-slq6b [803.756496ms] May 16 21:13:41.976: INFO: Created: latency-svc-ww5sw May 16 21:13:41.990: INFO: Got endpoints: latency-svc-ww5sw [812.79505ms] May 16 21:13:42.013: INFO: Created: latency-svc-9swsf May 16 21:13:42.027: INFO: Got endpoints: latency-svc-9swsf [801.236425ms] May 16 21:13:42.091: INFO: Created: latency-svc-4sntw May 16 21:13:42.144: INFO: Created: latency-svc-c2hxt May 16 21:13:42.144: INFO: Got endpoints: latency-svc-4sntw [786.824963ms] May 16 21:13:42.159: INFO: Got endpoints: latency-svc-c2hxt [759.365735ms] May 16 21:13:42.180: INFO: Created: latency-svc-76rt2 May 16 21:13:42.241: INFO: Got endpoints: latency-svc-76rt2 [754.583024ms] May 16 21:13:42.253: INFO: Created: latency-svc-jnj8b May 16 21:13:42.269: INFO: Got endpoints: latency-svc-jnj8b [742.528133ms] May 16 21:13:42.337: INFO: Created: latency-svc-42phg May 16 21:13:42.385: INFO: Got endpoints: latency-svc-42phg [821.637711ms] May 16 21:13:42.414: INFO: Created: latency-svc-xsm6j May 16 21:13:42.425: INFO: Got endpoints: latency-svc-xsm6j [794.269531ms] May 16 21:13:42.444: INFO: Created: latency-svc-66wz5 May 16 21:13:42.448: INFO: Got endpoints: latency-svc-66wz5 [806.790325ms] May 16 21:13:42.475: INFO: Created: latency-svc-zf97m May 16 21:13:42.479: INFO: Got endpoints: latency-svc-zf97m [775.542862ms] May 16 21:13:42.523: INFO: Created: latency-svc-49srt May 16 21:13:42.527: INFO: Got endpoints: latency-svc-49srt [741.023638ms] May 16 21:13:42.576: INFO: Created: latency-svc-2f8v6 May 16 21:13:42.593: INFO: Got endpoints: latency-svc-2f8v6 [795.995242ms] May 16 21:13:42.612: INFO: Created: latency-svc-84tg4 May 16 21:13:42.690: INFO: Got endpoints: latency-svc-84tg4 [856.205475ms] May 16 21:13:42.727: INFO: Created: latency-svc-f4nz8 May 16 21:13:42.738: INFO: Got endpoints: latency-svc-f4nz8 [795.3694ms] May 16 21:13:42.762: INFO: Created: latency-svc-tk7pm May 16 21:13:42.780: INFO: Got endpoints: latency-svc-tk7pm [824.943256ms] May 16 21:13:42.780: INFO: Latencies: [54.603686ms 90.312406ms 141.451813ms 195.685303ms 298.314915ms 309.433823ms 355.619786ms 423.474975ms 579.098927ms 590.246747ms 625.953424ms 664.4991ms 730.090797ms 734.950557ms 741.023638ms 742.528133ms 744.829758ms 747.658362ms 747.827567ms 748.276492ms 
748.577991ms 751.902346ms 754.397467ms 754.583024ms 759.365735ms 759.662503ms 766.167837ms 770.03869ms 771.394148ms 771.536918ms 774.146722ms 775.542862ms 786.824963ms 792.163618ms 794.269531ms 795.3694ms 795.995242ms 799.070515ms 801.236425ms 801.687426ms 803.121573ms 803.756496ms 805.133517ms 806.790325ms 807.448872ms 812.79505ms 813.896981ms 819.373289ms 820.138035ms 820.883692ms 821.637711ms 822.365461ms 824.849921ms 824.943256ms 827.114451ms 828.073567ms 828.517234ms 833.274041ms 836.065647ms 839.764747ms 842.199825ms 842.984482ms 849.275084ms 849.542854ms 850.014767ms 850.132664ms 853.736511ms 854.353854ms 856.205475ms 857.224316ms 860.760508ms 861.111435ms 862.118678ms 862.563538ms 865.690701ms 865.933895ms 867.328504ms 868.117281ms 869.434481ms 870.12627ms 872.693118ms 873.341891ms 873.502784ms 879.358231ms 883.313082ms 884.457843ms 886.367043ms 886.507999ms 886.575603ms 887.167015ms 887.798219ms 890.069007ms 890.692284ms 890.922094ms 891.690984ms 892.651596ms 894.677837ms 894.768717ms 898.252222ms 898.543479ms 900.463934ms 902.94066ms 902.965831ms 903.352136ms 903.479771ms 903.940094ms 904.645499ms 905.403986ms 906.08837ms 906.564266ms 910.314114ms 911.729647ms 912.710563ms 913.409376ms 914.676205ms 914.729007ms 915.427459ms 915.82701ms 917.267781ms 920.769161ms 921.646013ms 926.911849ms 927.472433ms 930.572261ms 932.544881ms 934.76819ms 934.819835ms 935.133249ms 935.324152ms 937.130344ms 939.430629ms 941.461437ms 945.812552ms 949.255642ms 951.334406ms 953.044668ms 953.552033ms 956.49943ms 958.897975ms 960.739785ms 962.887698ms 964.498079ms 975.129022ms 976.674981ms 985.993567ms 986.814251ms 987.861646ms 994.542009ms 995.207723ms 996.175121ms 1.00243937s 1.003417021s 1.009451348s 1.011397004s 1.011986567s 1.012371215s 1.012461481s 1.01419933s 1.017616576s 1.020627116s 1.021099214s 1.033583194s 1.034460294s 1.036751748s 1.039147039s 1.042169358s 1.053099157s 1.053470163s 1.056749544s 1.058825024s 1.065722902s 1.069017699s 1.071218042s 1.074399087s 1.075518409s 1.081707916s 1.082054084s 1.082735888s 1.08300035s 1.092390569s 1.108170208s 1.1119882s 1.112028418s 1.113438115s 1.117221679s 1.119893069s 1.123730386s 1.141171746s 1.144171202s 1.148288706s 1.148748033s 1.159712979s 1.174282546s 1.180755745s 1.186615794s 1.18933231s 1.194839877s 1.200563424s 1.229326846s 1.22957457s] May 16 21:13:42.780: INFO: 50 %ile: 900.463934ms May 16 21:13:42.780: INFO: 90 %ile: 1.108170208s May 16 21:13:42.780: INFO: 99 %ile: 1.229326846s May 16 21:13:42.780: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:13:42.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3347" for this suite. • [SLOW TEST:16.622 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":13,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:13:42.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:13:49.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8286" for this suite. • [SLOW TEST:7.111 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":14,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:13:49.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 21:13:50.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5491' May 16 21:13:50.239: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 21:13:50.239: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 16 21:13:50.248: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 16 21:13:50.284: INFO: scanned /root for discovery docs: May 16 21:13:50.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5491' May 16 21:14:07.465: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 16 21:14:07.465: INFO: stdout: "Created e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7\nScaling up e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 16 21:14:07.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5491' May 16 21:14:07.568: INFO: stderr: "" May 16 21:14:07.568: INFO: stdout: "e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7-ffbbq e2e-test-httpd-rc-6zvf7 " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 May 16 21:14:12.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5491' May 16 21:14:12.664: INFO: stderr: "" May 16 21:14:12.664: INFO: stdout: "e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7-ffbbq " May 16 21:14:12.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7-ffbbq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5491' May 16 21:14:12.764: INFO: stderr: "" May 16 21:14:12.764: INFO: stdout: "true" May 16 21:14:12.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7-ffbbq -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5491' May 16 21:14:12.851: INFO: stderr: "" May 16 21:14:12.851: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 16 21:14:12.851: INFO: e2e-test-httpd-rc-10ef20baea00a312acadd784e180dbd7-ffbbq is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 16 21:14:12.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5491' May 16 21:14:12.955: INFO: stderr: "" May 16 21:14:12.955: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:12.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5491" for this suite. • [SLOW TEST:23.071 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":15,"skipped":215,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:13.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-717d1226-029f-495a-bf68-ecbcbd072f5d STEP: Creating a pod to test consume configMaps May 16 21:14:13.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e" in namespace "configmap-4265" to be "success or failure" May 16 21:14:13.118: INFO: Pod "pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.784901ms May 16 21:14:15.176: INFO: Pod "pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0797323s May 16 21:14:17.180: INFO: Pod "pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e": Phase="Running", Reason="", readiness=true. Elapsed: 4.084211078s May 16 21:14:19.185: INFO: Pod "pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.08883398s STEP: Saw pod success May 16 21:14:19.185: INFO: Pod "pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e" satisfied condition "success or failure" May 16 21:14:19.188: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e container configmap-volume-test: STEP: delete the pod May 16 21:14:19.230: INFO: Waiting for pod pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e to disappear May 16 21:14:19.265: INFO: Pod pod-configmaps-8c9fd5ea-e9be-4659-a757-4adfb658a32e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:19.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4265" for this suite. • [SLOW TEST:6.272 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":217,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:19.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1e4ba90d-8668-43d5-a167-ead1c10b7d11 STEP: Creating a pod to test consume secrets May 16 21:14:19.341: INFO: Waiting up to 5m0s for pod "pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b" in namespace "secrets-7087" to be "success or failure" May 16 21:14:19.345: INFO: Pod "pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053242ms May 16 21:14:21.350: INFO: Pod "pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008442132s May 16 21:14:23.354: INFO: Pod "pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013257721s STEP: Saw pod success May 16 21:14:23.354: INFO: Pod "pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b" satisfied condition "success or failure" May 16 21:14:23.357: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b container secret-volume-test: STEP: delete the pod May 16 21:14:23.378: INFO: Waiting for pod pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b to disappear May 16 21:14:23.382: INFO: Pod pod-secrets-e598f1d9-5079-4877-923c-78b68effbc7b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:23.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7087" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":230,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:23.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 21:14:27.612: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:27.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9686" for this suite. 
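
For reference, the pod exercised by the termination-message test above has roughly the following shape — a minimal Go sketch using the k8s.io/api types, with the pod name, image, and command as illustrative assumptions (the log only shows that the container printed DONE and was expected to fail). The mechanism under test: with terminationMessagePolicy set to FallbackToLogsOnError, if the container writes nothing to /dev/termination-log, the kubelet copies the tail of the container log into the termination message, which is why the test finds DONE there.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		// Name and image are hypothetical; the e2e run generates its own.
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // let the container stay terminated
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.31",
				Command: []string{"/bin/sh", "-c", "echo DONE; exit 1"},
				// Nothing is written to /dev/termination-log, so the kubelet
				// falls back to the log tail ("DONE") as the termination message.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // print the manifest rather than creating it
}

The message then surfaces in status.containerStatuses[].state.terminated.message once the container exits, which is what the "get the container status" step above inspects.
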
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":233,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:27.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:14:27.906: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5788ffc5-fa81-49d7-aee1-0abaeed2ab2e" in namespace "security-context-test-1279" to be "success or failure" May 16 21:14:27.934: INFO: Pod "busybox-readonly-false-5788ffc5-fa81-49d7-aee1-0abaeed2ab2e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.997715ms May 16 21:14:29.938: INFO: Pod "busybox-readonly-false-5788ffc5-fa81-49d7-aee1-0abaeed2ab2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032380678s May 16 21:14:31.943: INFO: Pod "busybox-readonly-false-5788ffc5-fa81-49d7-aee1-0abaeed2ab2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036985594s May 16 21:14:31.943: INFO: Pod "busybox-readonly-false-5788ffc5-fa81-49d7-aee1-0abaeed2ab2e" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:31.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1279" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:31.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:14:32.214: INFO: Creating deployment "test-recreate-deployment" May 16 21:14:32.218: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 16 21:14:32.238: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 16 21:14:34.378: INFO: Waiting deployment "test-recreate-deployment" to complete May 16 21:14:34.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260472, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260472, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260472, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260472, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:14:36.409: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 16 21:14:36.413: INFO: Updating deployment test-recreate-deployment May 16 21:14:36.413: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 16 21:14:37.024: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9911 /apis/apps/v1/namespaces/deployment-9911/deployments/test-recreate-deployment 738a27e0-6e14-421f-8a6e-87c45886a2ae 16730773 2 2020-05-16 21:14:32 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} 
{[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030599e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-16 21:14:36 +0000 UTC,LastTransitionTime:2020-05-16 21:14:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-16 21:14:36 +0000 UTC,LastTransitionTime:2020-05-16 21:14:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 16 21:14:37.048: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9911 /apis/apps/v1/namespaces/deployment-9911/replicasets/test-recreate-deployment-5f94c574ff 8fe68508-77a6-492e-a8d0-6ff43f2eb500 16730772 1 2020-05-16 21:14:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 738a27e0-6e14-421f-8a6e-87c45886a2ae 0xc003059ef7 0xc003059ef8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003059f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 21:14:37.048: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 16 21:14:37.048: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-9911 /apis/apps/v1/namespaces/deployment-9911/replicasets/test-recreate-deployment-799c574856 971f44a4-82a0-4b8a-adea-9cbf14756369 16730762 2 
2020-05-16 21:14:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 738a27e0-6e14-421f-8a6e-87c45886a2ae 0xc000454097 0xc000454098}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000454268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 21:14:37.066: INFO: Pod "test-recreate-deployment-5f94c574ff-qhrr2" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qhrr2 test-recreate-deployment-5f94c574ff- deployment-9911 /api/v1/namespaces/deployment-9911/pods/test-recreate-deployment-5f94c574ff-qhrr2 fcda774d-029d-4369-a4a5-9629a01d95a6 16730775 0 2020-05-16 21:14:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 8fe68508-77a6-492e-a8d0-6ff43f2eb500 0xc000cb4227 0xc000cb4228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mshm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mshm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mshm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:14:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:14:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:14:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:14:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-16 21:14:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:37.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9911" for this suite. • [SLOW TEST:5.149 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":20,"skipped":282,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:37.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9090/configmap-test-c7500fc5-88ef-4d3f-8760-5a6d48702036 STEP: Creating a pod to test consume configMaps May 16 21:14:37.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2" in namespace "configmap-9090" to be "success or failure" May 16 21:14:37.488: INFO: Pod "pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2": Phase="Pending", Reason="", readiness=false. Elapsed: 312.438386ms May 16 21:14:39.590: INFO: Pod "pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413831194s May 16 21:14:41.593: INFO: Pod "pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417643269s May 16 21:14:43.601: INFO: Pod "pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.42538242s STEP: Saw pod success May 16 21:14:43.601: INFO: Pod "pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2" satisfied condition "success or failure" May 16 21:14:43.604: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2 container env-test: STEP: delete the pod May 16 21:14:43.660: INFO: Waiting for pod pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2 to disappear May 16 21:14:43.664: INFO: Pod pod-configmaps-b6827aa7-52bd-4ba1-8cc7-da9f214c15c2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:14:43.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9090" for this suite. • [SLOW TEST:6.570 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":296,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:14:43.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-37af6816-8a6b-4423-ab78-3a21cedd7386 STEP: Creating secret with name s-test-opt-upd-ab640185-8a35-47c2-9776-3f2f20727a21 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-37af6816-8a6b-4423-ab78-3a21cedd7386 STEP: Updating secret s-test-opt-upd-ab640185-8a35-47c2-9776-3f2f20727a21 STEP: Creating secret with name s-test-opt-create-9f0ad983-50c7-4722-aab0-45ae4fbf0001 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:15:56.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2374" for this suite. 
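
The optional-secret behavior exercised above hinges on SecretVolumeSource.Optional: an optional secret volume mounts even while the named secret is absent, and the kubelet re-projects the files as secrets are created, updated, or deleted. That is why the test can delete one secret, update another, create a third, and then simply wait to observe the changes in the mounted volume. A minimal sketch with hypothetical names (the run above uses generated ones):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true // pod starts even if the secret does not exist yet
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-optional-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "busybox:1.31",
				// Poll the projected file so updates show up in the container log.
				Command: []string{"/bin/sh", "-c",
					"while true; do cat /etc/secret/data 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "sec", MountPath: "/etc/secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sec",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // hypothetical
						Optional:   &optional,
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

The 72-second runtime above is expected: secret volume updates are eventually consistent, propagating on the kubelet's sync period rather than immediately.
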
• [SLOW TEST:72.578 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:15:56.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5017/secret-test-8fce4002-531a-43ee-8c1b-10fbaf897349 STEP: Creating a pod to test consume secrets May 16 21:15:56.361: INFO: Waiting up to 5m0s for pod "pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1" in namespace "secrets-5017" to be "success or failure" May 16 21:15:56.381: INFO: Pod "pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.244402ms May 16 21:15:58.434: INFO: Pod "pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073347783s May 16 21:16:00.459: INFO: Pod "pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097562381s STEP: Saw pod success May 16 21:16:00.459: INFO: Pod "pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1" satisfied condition "success or failure" May 16 21:16:00.461: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1 container env-test: STEP: delete the pod May 16 21:16:00.507: INFO: Waiting for pod pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1 to disappear May 16 21:16:00.539: INFO: Pod pod-configmaps-262567e1-1ff0-433d-9c14-9b2d8fbb9ea1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:16:00.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5017" for this suite. 
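
The environment-variable variant above wires a secret key into an env var through valueFrom.secretKeyRef instead of a volume. Unlike volume projections, env vars are resolved once at container start and never updated afterwards, which is why this test only needs to run a short-lived pod and check its log. A minimal sketch with hypothetical secret name and key:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.31",
				Command: []string{"/bin/sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							// Secret name and key are hypothetical.
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
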
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":360,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:16:00.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:16:00.647: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 16 21:16:05.650: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 21:16:05.650: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 16 21:16:07.654: INFO: Creating deployment "test-rollover-deployment" May 16 21:16:07.716: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 16 21:16:09.728: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 16 21:16:09.731: INFO: Ensure that both replica sets have 1 created replica May 16 21:16:09.735: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 16 21:16:09.739: INFO: Updating deployment test-rollover-deployment May 16 21:16:09.739: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 16 21:16:11.747: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 16 21:16:11.753: INFO: Make sure deployment "test-rollover-deployment" is complete May 16 21:16:11.759: INFO: all replica sets need to contain the pod-template-hash label May 16 21:16:11.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260569, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:16:13.769: INFO: all replica sets need to contain the pod-template-hash label May 16 21:16:13.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260572, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:16:15.768: INFO: all replica sets need to contain the pod-template-hash label May 16 21:16:15.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260572, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:16:17.768: INFO: all replica sets need to contain the pod-template-hash label May 16 21:16:17.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260572, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:16:19.768: INFO: all replica sets need to contain the pod-template-hash label May 16 21:16:19.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725260572, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:16:21.766: INFO: all replica sets need to contain the pod-template-hash label May 16 21:16:21.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260572, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260567, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:16:23.817: INFO: May 16 21:16:23.817: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 16 21:16:23.885: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1382 /apis/apps/v1/namespaces/deployment-1382/deployments/test-rollover-deployment e8bc7178-1a36-4b34-ad56-c397f3b8dcad 16731286 2 2020-05-16 21:16:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003262fe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-16 21:16:07 +0000 UTC,LastTransitionTime:2020-05-16 21:16:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully 
progressed.,LastUpdateTime:2020-05-16 21:16:22 +0000 UTC,LastTransitionTime:2020-05-16 21:16:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 16 21:16:23.887: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-1382 /apis/apps/v1/namespaces/deployment-1382/replicasets/test-rollover-deployment-574d6dfbff e7280f50-202e-4aca-bad0-49242b60dd0f 16731275 2 2020-05-16 21:16:09 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e8bc7178-1a36-4b34-ad56-c397f3b8dcad 0xc003263457 0xc003263458}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032634c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 16 21:16:23.887: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 16 21:16:23.887: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1382 /apis/apps/v1/namespaces/deployment-1382/replicasets/test-rollover-controller a854a794-5637-4bb0-8868-ee1570d0db10 16731284 2 2020-05-16 21:16:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e8bc7178-1a36-4b34-ad56-c397f3b8dcad 0xc003263387 0xc003263388}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0032633e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 21:16:23.887: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1382 
/apis/apps/v1/namespaces/deployment-1382/replicasets/test-rollover-deployment-f6c94f66c 959145a0-e7a3-4823-bc11-6bbad349a6f3 16731228 2 2020-05-16 21:16:07 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e8bc7178-1a36-4b34-ad56-c397f3b8dcad 0xc003263530 0xc003263531}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032635a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 21:16:23.891: INFO: Pod "test-rollover-deployment-574d6dfbff-6r6b4" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-6r6b4 test-rollover-deployment-574d6dfbff- deployment-1382 /api/v1/namespaces/deployment-1382/pods/test-rollover-deployment-574d6dfbff-6r6b4 4ea1997b-dfe2-4653-96df-1a30e864aa06 16731243 0 2020-05-16 21:16:09 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff e7280f50-202e-4aca-bad0-49242b60dd0f 0xc003263ae7 0xc003263ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9gp6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9gp6w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9gp6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:16:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:16:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.60,StartTime:2020-05-16 21:16:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 21:16:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2d07a839b265046cc47242d8eb02d799dcea54423e3e490b6d8af73f19cae018,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:16:23.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1382" for this suite. • [SLOW TEST:23.352 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":24,"skipped":367,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:16:23.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 16 21:16:23.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2261 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 16 21:16:27.103: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0516 21:16:27.024698 233 log.go:172] (0xc000a8c790) (0xc0006e00a0) Create stream\nI0516 21:16:27.024758 233 log.go:172] (0xc000a8c790) (0xc0006e00a0) Stream added, broadcasting: 1\nI0516 21:16:27.027262 233 log.go:172] (0xc000a8c790) Reply frame received for 1\nI0516 21:16:27.027325 233 log.go:172] (0xc000a8c790) (0xc0006e0140) Create stream\nI0516 21:16:27.027343 233 log.go:172] (0xc000a8c790) (0xc0006e0140) Stream added, broadcasting: 3\nI0516 21:16:27.028324 233 log.go:172] (0xc000a8c790) Reply frame received for 3\nI0516 21:16:27.028376 233 log.go:172] (0xc000a8c790) (0xc0006bbb80) Create stream\nI0516 21:16:27.028395 233 log.go:172] (0xc000a8c790) (0xc0006bbb80) Stream added, broadcasting: 5\nI0516 21:16:27.029625 233 log.go:172] (0xc000a8c790) Reply frame received for 5\nI0516 21:16:27.029663 233 log.go:172] (0xc000a8c790) (0xc0006bbc20) Create stream\nI0516 21:16:27.029676 233 log.go:172] (0xc000a8c790) (0xc0006bbc20) Stream added, broadcasting: 7\nI0516 21:16:27.030481 233 log.go:172] (0xc000a8c790) Reply frame received for 7\nI0516 21:16:27.030652 233 log.go:172] (0xc0006e0140) (3) Writing data frame\nI0516 21:16:27.030771 233 log.go:172] (0xc0006e0140) (3) Writing data frame\nI0516 21:16:27.031727 233 log.go:172] (0xc000a8c790) Data frame received for 5\nI0516 21:16:27.031758 233 log.go:172] (0xc0006bbb80) (5) Data frame handling\nI0516 21:16:27.031792 233 log.go:172] (0xc0006bbb80) (5) Data frame sent\nI0516 21:16:27.032312 233 log.go:172] (0xc000a8c790) Data frame received for 5\nI0516 21:16:27.032337 233 log.go:172] (0xc0006bbb80) (5) Data frame handling\nI0516 21:16:27.032354 233 log.go:172] (0xc0006bbb80) (5) Data frame sent\nI0516 21:16:27.076146 233 log.go:172] (0xc000a8c790) Data frame received for 5\nI0516 21:16:27.076188 233 log.go:172] (0xc0006bbb80) (5) Data frame handling\nI0516 21:16:27.076209 233 log.go:172] (0xc000a8c790) Data frame received for 7\nI0516 21:16:27.076223 233 log.go:172] (0xc0006bbc20) (7) Data frame handling\nI0516 21:16:27.076638 233 log.go:172] (0xc000a8c790) Data frame received for 1\nI0516 21:16:27.076681 233 log.go:172] (0xc0006e00a0) (1) Data frame handling\nI0516 21:16:27.076716 233 log.go:172] (0xc0006e00a0) (1) Data frame sent\nI0516 21:16:27.076824 233 log.go:172] (0xc000a8c790) (0xc0006e00a0) Stream removed, broadcasting: 1\nI0516 21:16:27.077577 233 log.go:172] (0xc000a8c790) (0xc0006e00a0) Stream removed, broadcasting: 1\nI0516 21:16:27.077601 233 log.go:172] (0xc000a8c790) (0xc0006e0140) Stream removed, broadcasting: 3\nI0516 21:16:27.077629 233 log.go:172] (0xc000a8c790) (0xc0006bbb80) Stream removed, broadcasting: 5\nI0516 21:16:27.077875 233 log.go:172] (0xc000a8c790) (0xc0006bbc20) Stream removed, broadcasting: 7\n" May 16 21:16:27.103: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:16:29.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2261" for this suite. 
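Note: the "kubectl run --rm --attach" invocation above is, under the hood, three API operations: create a Job, attach to its pod, and delete the Job once the command finishes. A minimal client-go sketch of the create-and-delete half, assuming client-go v0.17 (whose methods do not yet take a context, matching the v1.17 cluster in this run); the "default" namespace and the elided attach/wait are illustrative, not the e2e framework's actual code.

    package main

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Containers: []corev1.Container{{
                            Name:    "e2e-test-rm-busybox-job",
                            Image:   "docker.io/library/busybox:1.29",
                            Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
                            Stdin:   true, // the test pipes stdin in via --attach
                        }},
                    },
                },
            },
        }
        if _, err := client.BatchV1().Jobs("default").Create(job); err != nil {
            panic(err)
        }
        // ... attach to the pod and stream stdin here ...
        // Deleting with foreground propagation also removes the Job's pods,
        // approximating the cleanup that --rm performs.
        fg := metav1.DeletePropagationForeground
        if err := client.BatchV1().Jobs("default").Delete("e2e-test-rm-busybox-job",
            &metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
            panic(err)
        }
    }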
• [SLOW TEST:5.296 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":25,"skipped":382,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:16:29.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 16 21:16:29.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8513' May 16 21:16:30.610: INFO: stderr: "" May 16 21:16:30.610: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 21:16:30.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8513' May 16 21:16:30.730: INFO: stderr: "" May 16 21:16:30.730: INFO: stdout: "update-demo-nautilus-7nmrr update-demo-nautilus-872sz " May 16 21:16:30.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8513' May 16 21:16:30.846: INFO: stderr: "" May 16 21:16:30.846: INFO: stdout: "" May 16 21:16:30.846: INFO: update-demo-nautilus-7nmrr is created but not running May 16 21:16:35.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8513' May 16 21:16:35.960: INFO: stderr: "" May 16 21:16:35.960: INFO: stdout: "update-demo-nautilus-7nmrr update-demo-nautilus-872sz " May 16 21:16:35.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8513' May 16 21:16:36.053: INFO: stderr: "" May 16 21:16:36.053: INFO: stdout: "true" May 16 21:16:36.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8513' May 16 21:16:36.135: INFO: stderr: "" May 16 21:16:36.135: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 21:16:36.135: INFO: validating pod update-demo-nautilus-7nmrr May 16 21:16:36.146: INFO: got data: { "image": "nautilus.jpg" } May 16 21:16:36.146: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 21:16:36.146: INFO: update-demo-nautilus-7nmrr is verified up and running May 16 21:16:36.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-872sz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8513' May 16 21:16:36.247: INFO: stderr: "" May 16 21:16:36.247: INFO: stdout: "true" May 16 21:16:36.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-872sz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8513' May 16 21:16:36.348: INFO: stderr: "" May 16 21:16:36.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 21:16:36.348: INFO: validating pod update-demo-nautilus-872sz May 16 21:16:36.353: INFO: got data: { "image": "nautilus.jpg" } May 16 21:16:36.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 21:16:36.353: INFO: update-demo-nautilus-872sz is verified up and running STEP: using delete to clean up resources May 16 21:16:36.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8513' May 16 21:16:36.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 21:16:36.449: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 21:16:36.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8513' May 16 21:16:36.549: INFO: stderr: "No resources found in kubectl-8513 namespace.\n" May 16 21:16:36.549: INFO: stdout: "" May 16 21:16:36.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8513 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 21:16:36.651: INFO: stderr: "" May 16 21:16:36.651: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:16:36.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8513" for this suite. 
• [SLOW TEST:7.464 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":26,"skipped":386,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:16:36.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 16 21:16:36.743: INFO: >>> kubeConfig: /root/.kube/config May 16 21:16:38.692: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:16:49.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9003" for this suite. 
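Note: what this test asserts is that schema definitions from two CRDs in different API groups both appear in the aggregated OpenAPI document served at /openapi/v2. A hedged sketch of fetching that document with the discovery REST client; the definition key checked below belongs to a hypothetical foos.example.com CRD, not to the randomized CRDs the test generates.

    package main

    import (
        "fmt"
        "strings"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Fetch the aggregated OpenAPI v2 document; once a CRD is served,
        // its schema definitions should be published into it.
        body, err := client.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw()
        if err != nil {
            panic(err)
        }
        // "com.example.v1.Foo" is a made-up definition key for a
        // hypothetical foos.example.com CRD, not one from this run.
        fmt.Println("definition published:", strings.Contains(string(body), "com.example.v1.Foo"))
    }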
• [SLOW TEST:13.086 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":27,"skipped":394,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:16:49.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-cc1f973d-7d0f-4f89-9bc5-69594576b076 STEP: Creating a pod to test consume configMaps May 16 21:16:49.794: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f" in namespace "projected-7741" to be "success or failure" May 16 21:16:49.809: INFO: Pod "pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.821249ms May 16 21:16:51.928: INFO: Pod "pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133466384s May 16 21:16:53.933: INFO: Pod "pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138110835s STEP: Saw pod success May 16 21:16:53.933: INFO: Pod "pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f" satisfied condition "success or failure" May 16 21:16:53.937: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f container projected-configmap-volume-test: STEP: delete the pod May 16 21:16:54.021: INFO: Waiting for pod pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f to disappear May 16 21:16:54.026: INFO: Pod pod-projected-configmaps-bc83e6c4-5518-4983-95a7-02d9e8d6b52f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:16:54.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7741" for this suite. 
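Note: for reference, the approximate shape of the pod this test builds: a projected ConfigMap volume whose Items field remaps a stored key to a different path inside the volume, with the pod running as a non-root UID. Names, the key/path pair, and the image below are illustrative, not the randomized ones in the log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        nonRoot := int64(1000) // any non-root UID
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{
                                        Name: "projected-configmap-test-volume-map",
                                    },
                                    // The mapping under test: serve key "data-1"
                                    // at a different path inside the volume.
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-configmap-volume",
                        MountPath: "/etc/projected-configmap-volume",
                    }},
                }},
            },
        }
        fmt.Println(pod.Name) // object ready to POST via client-go
    }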
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:16:54.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:17:22.245: INFO: Container started at 2020-05-16 21:16:56 +0000 UTC, pod became ready at 2020-05-16 21:17:20 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:17:22.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4573" for this suite. • [SLOW TEST:28.217 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":442,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:17:22.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 16 21:17:26.844: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6800 pod-service-account-08c5b0be-6b4e-4c51-9c4a-f87089e19f84 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 16 21:17:27.094: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6800 pod-service-account-08c5b0be-6b4e-4c51-9c4a-f87089e19f84 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a 
file in the container May 16 21:17:27.318: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6800 pod-service-account-08c5b0be-6b4e-4c51-9c4a-f87089e19f84 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:17:27.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6800" for this suite. • [SLOW TEST:5.296 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":30,"skipped":446,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:17:27.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:17:27.946: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"da68393a-46eb-4104-af24-ebb4205ddeae", Controller:(*bool)(0xc003ee018a), BlockOwnerDeletion:(*bool)(0xc003ee018b)}} May 16 21:17:27.955: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bb6252bf-7cbf-4a2a-9150-06d9693c9e67", Controller:(*bool)(0xc003f32222), BlockOwnerDeletion:(*bool)(0xc003f32223)}} May 16 21:17:27.998: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"92e90cf3-46f9-4b28-9d8c-9d217e7a34bd", Controller:(*bool)(0xc003fc05da), BlockOwnerDeletion:(*bool)(0xc003fc05db)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:17:33.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-934" for this suite. 
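Note: the circle built above is visible in the logged OwnerReferences: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The garbage collector must tolerate this cycle rather than deadlock or cascade-delete everything. A sketch of constructing such references; the UIDs are placeholders, since real ones only exist after the objects are created.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownerRef builds the kind of reference seen in the log, with
    // Controller and BlockOwnerDeletion both set.
    func ownerRef(name string, uid types.UID) metav1.OwnerReference {
        controller, block := true, true
        return metav1.OwnerReference{
            APIVersion: "v1", Kind: "Pod", Name: name, UID: uid,
            Controller: &controller, BlockOwnerDeletion: &block,
        }
    }

    func main() {
        pod := func(name, ownerName string, ownerUID types.UID) *corev1.Pod {
            return &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
                Name:            name,
                OwnerReferences: []metav1.OwnerReference{ownerRef(ownerName, ownerUID)},
            }}
        }
        // Placeholder UIDs; real ones come from the created objects.
        p1 := pod("pod1", "pod3", "uid-of-pod3")
        p2 := pod("pod2", "pod1", "uid-of-pod1")
        p3 := pod("pod3", "pod2", "uid-of-pod2")
        fmt.Println(p1.Name, p2.Name, p3.Name) // owned-by cycle: pod1 -> pod3 -> pod2 -> pod1
    }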
• [SLOW TEST:5.556 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":31,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:17:33.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8800 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 16 21:17:33.230: INFO: Found 0 stateful pods, waiting for 3 May 16 21:17:43.234: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:17:43.234: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:17:43.234: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 16 21:17:53.235: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:17:53.235: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:17:53.235: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 16 21:17:53.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8800 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:17:53.524: INFO: stderr: "I0516 21:17:53.387213 547 log.go:172] (0xc000524790) (0xc0006c3ea0) Create stream\nI0516 21:17:53.387257 547 log.go:172] (0xc000524790) (0xc0006c3ea0) Stream added, broadcasting: 1\nI0516 21:17:53.389465 547 log.go:172] (0xc000524790) Reply frame received for 1\nI0516 21:17:53.389492 547 log.go:172] (0xc000524790) (0xc00091a000) Create stream\nI0516 21:17:53.389499 547 log.go:172] (0xc000524790) (0xc00091a000) Stream added, broadcasting: 3\nI0516 21:17:53.390586 547 log.go:172] (0xc000524790) Reply frame received for 3\nI0516 21:17:53.390612 547 log.go:172] (0xc000524790) (0xc00063e780) Create stream\nI0516 21:17:53.390626 547 log.go:172] (0xc000524790) (0xc00063e780) Stream added, broadcasting: 5\nI0516 21:17:53.391485 547 log.go:172] (0xc000524790) Reply frame received for 5\nI0516 
21:17:53.479450 547 log.go:172] (0xc000524790) Data frame received for 5\nI0516 21:17:53.479484 547 log.go:172] (0xc00063e780) (5) Data frame handling\nI0516 21:17:53.479505 547 log.go:172] (0xc00063e780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:17:53.515213 547 log.go:172] (0xc000524790) Data frame received for 3\nI0516 21:17:53.515265 547 log.go:172] (0xc00091a000) (3) Data frame handling\nI0516 21:17:53.515306 547 log.go:172] (0xc00091a000) (3) Data frame sent\nI0516 21:17:53.515481 547 log.go:172] (0xc000524790) Data frame received for 3\nI0516 21:17:53.515533 547 log.go:172] (0xc00091a000) (3) Data frame handling\nI0516 21:17:53.515573 547 log.go:172] (0xc000524790) Data frame received for 5\nI0516 21:17:53.515685 547 log.go:172] (0xc00063e780) (5) Data frame handling\nI0516 21:17:53.517971 547 log.go:172] (0xc000524790) Data frame received for 1\nI0516 21:17:53.518005 547 log.go:172] (0xc0006c3ea0) (1) Data frame handling\nI0516 21:17:53.518028 547 log.go:172] (0xc0006c3ea0) (1) Data frame sent\nI0516 21:17:53.518092 547 log.go:172] (0xc000524790) (0xc0006c3ea0) Stream removed, broadcasting: 1\nI0516 21:17:53.518138 547 log.go:172] (0xc000524790) Go away received\nI0516 21:17:53.518652 547 log.go:172] (0xc000524790) (0xc0006c3ea0) Stream removed, broadcasting: 1\nI0516 21:17:53.518671 547 log.go:172] (0xc000524790) (0xc00091a000) Stream removed, broadcasting: 3\nI0516 21:17:53.518682 547 log.go:172] (0xc000524790) (0xc00063e780) Stream removed, broadcasting: 5\n" May 16 21:17:53.524: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:17:53.524: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 16 21:18:03.556: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 16 21:18:13.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8800 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:18:13.833: INFO: stderr: "I0516 21:18:13.723081 567 log.go:172] (0xc0009e0000) (0xc000a9c000) Create stream\nI0516 21:18:13.723139 567 log.go:172] (0xc0009e0000) (0xc000a9c000) Stream added, broadcasting: 1\nI0516 21:18:13.726328 567 log.go:172] (0xc0009e0000) Reply frame received for 1\nI0516 21:18:13.726363 567 log.go:172] (0xc0009e0000) (0xc000a74000) Create stream\nI0516 21:18:13.726378 567 log.go:172] (0xc0009e0000) (0xc000a74000) Stream added, broadcasting: 3\nI0516 21:18:13.727150 567 log.go:172] (0xc0009e0000) Reply frame received for 3\nI0516 21:18:13.727198 567 log.go:172] (0xc0009e0000) (0xc000ad0000) Create stream\nI0516 21:18:13.727219 567 log.go:172] (0xc0009e0000) (0xc000ad0000) Stream added, broadcasting: 5\nI0516 21:18:13.728215 567 log.go:172] (0xc0009e0000) Reply frame received for 5\nI0516 21:18:13.826029 567 log.go:172] (0xc0009e0000) Data frame received for 5\nI0516 21:18:13.826080 567 log.go:172] (0xc000ad0000) (5) Data frame handling\nI0516 21:18:13.826125 567 log.go:172] (0xc000ad0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:18:13.826191 567 log.go:172] (0xc0009e0000) Data frame received for 3\nI0516 21:18:13.826221 567 log.go:172] (0xc000a74000) (3) Data frame handling\nI0516 21:18:13.826243 
567 log.go:172] (0xc000a74000) (3) Data frame sent\nI0516 21:18:13.826268 567 log.go:172] (0xc0009e0000) Data frame received for 3\nI0516 21:18:13.826284 567 log.go:172] (0xc000a74000) (3) Data frame handling\nI0516 21:18:13.826391 567 log.go:172] (0xc0009e0000) Data frame received for 5\nI0516 21:18:13.826423 567 log.go:172] (0xc000ad0000) (5) Data frame handling\nI0516 21:18:13.827632 567 log.go:172] (0xc0009e0000) Data frame received for 1\nI0516 21:18:13.827654 567 log.go:172] (0xc000a9c000) (1) Data frame handling\nI0516 21:18:13.827672 567 log.go:172] (0xc000a9c000) (1) Data frame sent\nI0516 21:18:13.827773 567 log.go:172] (0xc0009e0000) (0xc000a9c000) Stream removed, broadcasting: 1\nI0516 21:18:13.828076 567 log.go:172] (0xc0009e0000) Go away received\nI0516 21:18:13.828258 567 log.go:172] (0xc0009e0000) (0xc000a9c000) Stream removed, broadcasting: 1\nI0516 21:18:13.828290 567 log.go:172] (0xc0009e0000) (0xc000a74000) Stream removed, broadcasting: 3\nI0516 21:18:13.828313 567 log.go:172] (0xc0009e0000) (0xc000ad0000) Stream removed, broadcasting: 5\n" May 16 21:18:13.833: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:18:13.833: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:18:33.854: INFO: Waiting for StatefulSet statefulset-8800/ss2 to complete update May 16 21:18:33.854: INFO: Waiting for Pod statefulset-8800/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 16 21:18:43.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8800 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:18:44.106: INFO: stderr: "I0516 21:18:43.984578 587 log.go:172] (0xc0009df550) (0xc0009a2780) Create stream\nI0516 21:18:43.984624 587 log.go:172] (0xc0009df550) (0xc0009a2780) Stream added, broadcasting: 1\nI0516 21:18:43.989055 587 log.go:172] (0xc0009df550) Reply frame received for 1\nI0516 21:18:43.989097 587 log.go:172] (0xc0009df550) (0xc0006426e0) Create stream\nI0516 21:18:43.989108 587 log.go:172] (0xc0009df550) (0xc0006426e0) Stream added, broadcasting: 3\nI0516 21:18:43.990022 587 log.go:172] (0xc0009df550) Reply frame received for 3\nI0516 21:18:43.990048 587 log.go:172] (0xc0009df550) (0xc0004314a0) Create stream\nI0516 21:18:43.990054 587 log.go:172] (0xc0009df550) (0xc0004314a0) Stream added, broadcasting: 5\nI0516 21:18:43.991151 587 log.go:172] (0xc0009df550) Reply frame received for 5\nI0516 21:18:44.067684 587 log.go:172] (0xc0009df550) Data frame received for 5\nI0516 21:18:44.067714 587 log.go:172] (0xc0004314a0) (5) Data frame handling\nI0516 21:18:44.067732 587 log.go:172] (0xc0004314a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:18:44.099601 587 log.go:172] (0xc0009df550) Data frame received for 3\nI0516 21:18:44.099623 587 log.go:172] (0xc0006426e0) (3) Data frame handling\nI0516 21:18:44.099635 587 log.go:172] (0xc0006426e0) (3) Data frame sent\nI0516 21:18:44.099640 587 log.go:172] (0xc0009df550) Data frame received for 3\nI0516 21:18:44.099645 587 log.go:172] (0xc0006426e0) (3) Data frame handling\nI0516 21:18:44.099707 587 log.go:172] (0xc0009df550) Data frame received for 5\nI0516 21:18:44.099730 587 log.go:172] (0xc0004314a0) (5) Data frame handling\nI0516 21:18:44.101646 587 log.go:172] (0xc0009df550) Data frame received for 1\nI0516 
21:18:44.101678 587 log.go:172] (0xc0009a2780) (1) Data frame handling\nI0516 21:18:44.101696 587 log.go:172] (0xc0009a2780) (1) Data frame sent\nI0516 21:18:44.101711 587 log.go:172] (0xc0009df550) (0xc0009a2780) Stream removed, broadcasting: 1\nI0516 21:18:44.101743 587 log.go:172] (0xc0009df550) Go away received\nI0516 21:18:44.102022 587 log.go:172] (0xc0009df550) (0xc0009a2780) Stream removed, broadcasting: 1\nI0516 21:18:44.102041 587 log.go:172] (0xc0009df550) (0xc0006426e0) Stream removed, broadcasting: 3\nI0516 21:18:44.102052 587 log.go:172] (0xc0009df550) (0xc0004314a0) Stream removed, broadcasting: 5\n" May 16 21:18:44.106: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:18:44.106: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:18:54.138: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 16 21:19:04.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8800 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:19:04.403: INFO: stderr: "I0516 21:19:04.304421 607 log.go:172] (0xc0005e66e0) (0xc000906140) Create stream\nI0516 21:19:04.304489 607 log.go:172] (0xc0005e66e0) (0xc000906140) Stream added, broadcasting: 1\nI0516 21:19:04.307388 607 log.go:172] (0xc0005e66e0) Reply frame received for 1\nI0516 21:19:04.307442 607 log.go:172] (0xc0005e66e0) (0xc000663ae0) Create stream\nI0516 21:19:04.307468 607 log.go:172] (0xc0005e66e0) (0xc000663ae0) Stream added, broadcasting: 3\nI0516 21:19:04.308316 607 log.go:172] (0xc0005e66e0) Reply frame received for 3\nI0516 21:19:04.308356 607 log.go:172] (0xc0005e66e0) (0xc0004434a0) Create stream\nI0516 21:19:04.308372 607 log.go:172] (0xc0005e66e0) (0xc0004434a0) Stream added, broadcasting: 5\nI0516 21:19:04.309406 607 log.go:172] (0xc0005e66e0) Reply frame received for 5\nI0516 21:19:04.395616 607 log.go:172] (0xc0005e66e0) Data frame received for 5\nI0516 21:19:04.395667 607 log.go:172] (0xc0004434a0) (5) Data frame handling\nI0516 21:19:04.395684 607 log.go:172] (0xc0004434a0) (5) Data frame sent\nI0516 21:19:04.395696 607 log.go:172] (0xc0005e66e0) Data frame received for 5\nI0516 21:19:04.395706 607 log.go:172] (0xc0004434a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:19:04.395743 607 log.go:172] (0xc0005e66e0) Data frame received for 3\nI0516 21:19:04.395770 607 log.go:172] (0xc000663ae0) (3) Data frame handling\nI0516 21:19:04.395791 607 log.go:172] (0xc000663ae0) (3) Data frame sent\nI0516 21:19:04.395802 607 log.go:172] (0xc0005e66e0) Data frame received for 3\nI0516 21:19:04.395810 607 log.go:172] (0xc000663ae0) (3) Data frame handling\nI0516 21:19:04.397660 607 log.go:172] (0xc0005e66e0) Data frame received for 1\nI0516 21:19:04.397686 607 log.go:172] (0xc000906140) (1) Data frame handling\nI0516 21:19:04.397699 607 log.go:172] (0xc000906140) (1) Data frame sent\nI0516 21:19:04.397717 607 log.go:172] (0xc0005e66e0) (0xc000906140) Stream removed, broadcasting: 1\nI0516 21:19:04.397827 607 log.go:172] (0xc0005e66e0) Go away received\nI0516 21:19:04.398147 607 log.go:172] (0xc0005e66e0) (0xc000906140) Stream removed, broadcasting: 1\nI0516 21:19:04.398165 607 log.go:172] (0xc0005e66e0) (0xc000663ae0) Stream removed, broadcasting: 3\nI0516 21:19:04.398173 607 log.go:172] (0xc0005e66e0) (0xc0004434a0) Stream removed, 
broadcasting: 5\n" May 16 21:19:04.403: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:19:04.403: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:19:14.424: INFO: Waiting for StatefulSet statefulset-8800/ss2 to complete update May 16 21:19:14.424: INFO: Waiting for Pod statefulset-8800/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 16 21:19:14.424: INFO: Waiting for Pod statefulset-8800/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 16 21:19:14.424: INFO: Waiting for Pod statefulset-8800/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 16 21:19:24.431: INFO: Waiting for StatefulSet statefulset-8800/ss2 to complete update May 16 21:19:24.431: INFO: Waiting for Pod statefulset-8800/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 16 21:19:24.431: INFO: Waiting for Pod statefulset-8800/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 16 21:19:34.432: INFO: Waiting for StatefulSet statefulset-8800/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 16 21:19:44.432: INFO: Deleting all statefulset in ns statefulset-8800 May 16 21:19:44.435: INFO: Scaling statefulset ss2 to 0 May 16 21:20:04.454: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:20:04.457: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:20:04.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8800" for this suite. 
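Note: both the rolling update and the rollback above are ordinary writes to spec.template; the StatefulSet controller records each distinct template as a ControllerRevision (ss2-65c7964b94 and ss2-84f9d6bf57 in the log) and reuses an existing revision when a template reappears, which is why the rollback converges on the original revision name. A sketch of the update step with a conflict-retry loop (v0.17 signatures; error handling trimmed):

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func setImage(client *kubernetes.Clientset, ns, name, image string) error {
        // RetryOnConflict guards against concurrent writes by the
        // StatefulSet controller while the rollout is in flight.
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ss, err := client.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            ss.Spec.Template.Spec.Containers[0].Image = image
            _, err = client.AppsV1().StatefulSets(ns).Update(ss)
            return err
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        // Roll forward, then back, as the test does (namespace/name from the log).
        _ = setImage(client, "statefulset-8800", "ss2", "docker.io/library/httpd:2.4.39-alpine")
        _ = setImage(client, "statefulset-8800", "ss2", "docker.io/library/httpd:2.4.38-alpine")
    }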
• [SLOW TEST:151.375 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":32,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:20:04.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-1f8eecc6-5a37-4a32-91a0-bf3d41f734b3 STEP: Creating secret with name secret-projected-all-test-volume-9db0a7aa-ce66-4013-a077-97444364a189 STEP: Creating a pod to test Check all projections for projected volume plugin May 16 21:20:04.608: INFO: Waiting up to 5m0s for pod "projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013" in namespace "projected-2186" to be "success or failure" May 16 21:20:04.612: INFO: Pod "projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.778538ms May 16 21:20:06.654: INFO: Pod "projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046063522s May 16 21:20:08.756: INFO: Pod "projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147917335s STEP: Saw pod success May 16 21:20:08.756: INFO: Pod "projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013" satisfied condition "success or failure" May 16 21:20:08.759: INFO: Trying to get logs from node jerma-worker pod projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013 container projected-all-volume-test: STEP: delete the pod May 16 21:20:08.792: INFO: Waiting for pod projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013 to disappear May 16 21:20:08.850: INFO: Pod projected-volume-c89bb9c1-6fc8-439e-876a-61f8be80a013 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:20:08.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2186" for this suite. 
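Note: "all components" here means a single projected volume that merges ConfigMap, Secret, and downward API sources, which is what distinguishes this test from the per-source projected tests. The approximate shape, with illustrative object names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // One projected volume combining all three source types the
        // test exercises.
        vol := corev1.Volume{
            Name: "projected-all",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Println(vol.Name) // mount this volume in the test container
    }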
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":33,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:20:08.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 16 21:20:09.100: INFO: Waiting up to 5m0s for pod "var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a" in namespace "var-expansion-553" to be "success or failure" May 16 21:20:09.108: INFO: Pod "var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.985133ms May 16 21:20:11.112: INFO: Pod "var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011920654s May 16 21:20:13.116: INFO: Pod "var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015844872s STEP: Saw pod success May 16 21:20:13.116: INFO: Pod "var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a" satisfied condition "success or failure" May 16 21:20:13.118: INFO: Trying to get logs from node jerma-worker pod var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a container dapi-container: STEP: delete the pod May 16 21:20:13.133: INFO: Waiting for pod var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a to disappear May 16 21:20:13.140: INFO: Pod var-expansion-492d3765-c040-4d52-8367-c78cb6ad653a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:20:13.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-553" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":527,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:20:13.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0516 21:20:24.699526 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 21:20:24.699: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:20:24.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1301" for this suite. 
• [SLOW TEST:11.671 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":35,"skipped":528,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:20:24.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 16 21:20:25.129: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 21:20:25.186: INFO: Waiting for terminating namespaces to be deleted... May 16 21:20:25.188: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 16 21:20:25.192: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:25.192: INFO: Container kindnet-cni ready: true, restart count 0 May 16 21:20:25.192: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:25.192: INFO: Container kube-proxy ready: true, restart count 0 May 16 21:20:25.192: INFO: simpletest-rc-to-be-deleted-ffz88 from gc-1301 started at 2020-05-16 21:20:13 +0000 UTC (1 container statuses recorded) May 16 21:20:25.192: INFO: Container nginx ready: true, restart count 0 May 16 21:20:25.192: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 16 21:20:25.204: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container kube-bench ready: false, restart count 0 May 16 21:20:25.204: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container kindnet-cni ready: true, restart count 0 May 16 21:20:25.204: INFO: simpletest-rc-to-be-deleted-g8zcr from gc-1301 started at 2020-05-16 21:20:13 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container nginx ready: true, restart count 0 May 16 21:20:25.204: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container kube-proxy ready: true, restart count 0 May 16 21:20:25.204: INFO: simpletest-rc-to-be-deleted-69s9j from gc-1301 started at 2020-05-16 21:20:13 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container nginx ready: true, restart count 0 May 16 21:20:25.204: INFO:
simpletest-rc-to-be-deleted-gmh9j from gc-1301 started at 2020-05-16 21:20:13 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container nginx ready: true, restart count 0 May 16 21:20:25.204: INFO: simpletest-rc-to-be-deleted-4x5w9 from gc-1301 started at 2020-05-16 21:20:13 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container nginx ready: true, restart count 0 May 16 21:20:25.204: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 16 21:20:25.204: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 16 21:20:25.306: INFO: Pod simpletest-rc-to-be-deleted-4x5w9 requesting resource cpu=0m on Node jerma-worker2 May 16 21:20:25.306: INFO: Pod simpletest-rc-to-be-deleted-69s9j requesting resource cpu=0m on Node jerma-worker2 May 16 21:20:25.306: INFO: Pod simpletest-rc-to-be-deleted-ffz88 requesting resource cpu=0m on Node jerma-worker May 16 21:20:25.306: INFO: Pod simpletest-rc-to-be-deleted-g8zcr requesting resource cpu=0m on Node jerma-worker2 May 16 21:20:25.306: INFO: Pod simpletest-rc-to-be-deleted-gmh9j requesting resource cpu=0m on Node jerma-worker2 May 16 21:20:25.306: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 16 21:20:25.306: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 16 21:20:25.306: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 16 21:20:25.306: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 16 21:20:25.306: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 16 21:20:25.321: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0.160f9f0235bec6e5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-451/filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0.160f9f02855040eb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0.160f9f02cc713de3], Reason = [Created], Message = [Created container filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0] STEP: Considering event: Type = [Normal], Name = [filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0.160f9f02e9ce4df0], Reason = [Started], Message = [Started container filler-pod-b5af0936-23df-4c6c-b212-3067b93ae0f0] STEP: Considering event: Type = [Normal], Name = [filler-pod-baab4547-e310-4add-b9db-4f60aba7db95.160f9f023a000071], Reason = [Scheduled], Message = [Successfully assigned sched-pred-451/filler-pod-baab4547-e310-4add-b9db-4f60aba7db95 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-baab4547-e310-4add-b9db-4f60aba7db95.160f9f02d13dd679], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-baab4547-e310-4add-b9db-4f60aba7db95.160f9f0304f41d5c], Reason = [Created], Message = [Created container filler-pod-baab4547-e310-4add-b9db-4f60aba7db95] STEP: Considering event: Type = [Normal], Name = [filler-pod-baab4547-e310-4add-b9db-4f60aba7db95.160f9f0317c92312], Reason = [Started], Message = [Started container filler-pod-baab4547-e310-4add-b9db-4f60aba7db95] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f9f03a393c66b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f9f03abc8cb75], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:20:32.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-451" for this suite. 
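The predicate being validated above is plain CPU accounting: the suite sums the existing requests on each node (kindnet's 100m is the only nonzero one in the listing), fills the remainder with a pause pod requesting cpu=11130m, then shows that one more pod cannot fit, producing the "Insufficient cpu" events. The filler pod, roughly (Go API types v0.17; only the image name and the 11130m figure come from the log, the rest is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A pause pod sized to soak up a node's remaining allocatable CPU.
        // Any further pod that requests CPU then fails the fit predicate.
        filler := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "filler",
                    Image: "k8s.gcr.io/pause:3.1", // image named in the scheduling events above
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("11130m")},
                        Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("11130m")},
                    },
                }},
            },
        }
        fmt.Println(filler.Name)
    }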
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.840 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":36,"skipped":539,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:20:32.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:20:33.701: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:20:35.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260833, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260833, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260834, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260833, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:20:38.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by 
the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:20:49.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8490" for this suite. STEP: Destroying namespace "webhook-8490-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.690 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":37,"skipped":544,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:20:49.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 16 21:20:49.420: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 21:20:49.434: INFO: Waiting for terminating namespaces to be deleted...
May 16 21:20:49.436: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 16 21:20:49.440: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:49.440: INFO: Container kindnet-cni ready: true, restart count 0 May 16 21:20:49.440: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:49.440: INFO: Container kube-proxy ready: true, restart count 0 May 16 21:20:49.440: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 16 21:20:49.445: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:49.445: INFO: Container kube-proxy ready: true, restart count 0 May 16 21:20:49.445: INFO: sample-webhook-deployment-5f65f8c764-697ht from webhook-8490 started at 2020-05-16 21:20:34 +0000 UTC (1 container statuses recorded) May 16 21:20:49.445: INFO: Container sample-webhook ready: true, restart count 0 May 16 21:20:49.445: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 16 21:20:49.445: INFO: Container kube-hunter ready: false, restart count 0 May 16 21:20:49.445: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:20:49.445: INFO: Container kindnet-cni ready: true, restart count 0 May 16 21:20:49.445: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 16 21:20:49.445: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-dcbc2725-ace6-4007-bb83-76db712a2a4d 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-dcbc2725-ace6-4007-bb83-76db712a2a4d off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-dcbc2725-ace6-4007-bb83-76db712a2a4d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:05.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4975" for this suite.
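What makes pod1, pod2 and pod3 above co-schedulable is that host-port conflicts are keyed on the (hostIP, hostPort, protocol) triple, not on the port alone. A sketch of the three declarations (Go API types v0.17; the port 54321, the hostIPs/protocols and the node label come from the log, while the image and container port are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostPortPod builds a pod binding hostPort 54321 for a given hostIP and
    // protocol, pinned by node selector the way the test pins to its labelled node.
    func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{"kubernetes.io/e2e-dcbc2725-ace6-4007-bb83-76db712a2a4d": "90"},
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 8080,
                        HostPort:      54321,
                        HostIP:        hostIP,
                        Protocol:      proto,
                    }},
                }},
            },
        }
    }

    func main() {
        // All three coexist on one node: each (hostIP, hostPort, protocol)
        // triple is distinct, so the scheduler sees no port conflict.
        for _, p := range []*corev1.Pod{
            hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP),
            hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP),
            hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP),
        } {
            fmt.Println(p.Name)
        }
    }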
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.306 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":38,"skipped":551,"failed":0} S ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:05.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:23.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7644" for this suite. 
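The Job above relies on restartPolicy OnFailure: failures are retried by the kubelet inside the same pod ("locally restarted") rather than by the Job controller replacing pods. A job in that shape, in outline (the emptyDir marker-file trick makes each pod fail exactly once; the command, image and parallelism/completions values are assumptions, not from the log):

    package main

    import (
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        parallelism, completions := int32(2), int32(4)
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
            Spec: batchv1.JobSpec{
                Parallelism: &parallelism,
                Completions: &completions,
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        // OnFailure is the point of the test: the kubelet restarts
                        // the failed container in place instead of the Job
                        // controller creating a replacement pod.
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Volumes: []corev1.Volume{{
                            Name:         "data",
                            VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                        }},
                        Containers: []corev1.Container{{
                            Name:  "c",
                            Image: "busybox",
                            // Fail on the first attempt, succeed after the restart:
                            // the marker file survives because the volume does.
                            Command:      []string{"sh", "-c", "if [ -e /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"},
                            VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
                        }},
                    },
                },
            },
        }
        fmt.Println(job.Name)
    }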
• [SLOW TEST:18.122 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":39,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:23.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:21:24.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120" in namespace "downward-api-5761" to be "success or failure" May 16 21:21:24.021: INFO: Pod "downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120": Phase="Pending", Reason="", readiness=false. Elapsed: 16.357027ms May 16 21:21:26.392: INFO: Pod "downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387006183s May 16 21:21:28.396: INFO: Pod "downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120": Phase="Running", Reason="", readiness=true. Elapsed: 4.390894243s May 16 21:21:30.401: INFO: Pod "downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395822613s STEP: Saw pod success May 16 21:21:30.401: INFO: Pod "downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120" satisfied condition "success or failure" May 16 21:21:30.404: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120 container client-container: STEP: delete the pod May 16 21:21:30.465: INFO: Waiting for pod downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120 to disappear May 16 21:21:30.477: INFO: Pod downwardapi-volume-184d72a2-07cf-4b4b-97cb-22d63ab68120 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:30.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5761" for this suite. 
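The downward-API volume exercised above projects the container's own memory request into a file that the test then reads back from the container logs. In outline (Go API types v0.17; the 32Mi request, mount path and file name are illustrative, the container name client-container is from the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container", // container name from the log above
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                // resourceFieldRef projects the named container's
                                // request into the mounted file.
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }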
• [SLOW TEST:6.707 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":599,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:30.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:34.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5429" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":609,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:34.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:21:34.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8" in namespace "downward-api-2131" to be "success or failure" May 16 21:21:34.793: INFO: Pod "downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.420552ms May 16 21:21:36.811: INFO: Pod "downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067666669s May 16 21:21:38.815: INFO: Pod "downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.072173916s May 16 21:21:40.819: INFO: Pod "downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076027482s STEP: Saw pod success May 16 21:21:40.819: INFO: Pod "downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8" satisfied condition "success or failure" May 16 21:21:40.821: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8 container client-container: STEP: delete the pod May 16 21:21:40.860: INFO: Waiting for pod downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8 to disappear May 16 21:21:40.878: INFO: Pod downwardapi-volume-323a055f-c0e3-491b-b894-8b643afc92f8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:40.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2131" for this suite. • [SLOW TEST:6.208 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":629,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:40.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:21:45.099: INFO: Waiting up to 5m0s for pod "client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6" in namespace "pods-1867" to be "success or failure" May 16 21:21:45.105: INFO: Pod "client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.058642ms May 16 21:21:47.109: INFO: Pod "client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009801632s May 16 21:21:49.114: INFO: Pod "client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014460025s STEP: Saw pod success May 16 21:21:49.114: INFO: Pod "client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6" satisfied condition "success or failure" May 16 21:21:49.118: INFO: Trying to get logs from node jerma-worker pod client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6 container env3cont: STEP: delete the pod May 16 21:21:49.388: INFO: Waiting for pod client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6 to disappear May 16 21:21:49.394: INFO: Pod client-envvars-ecdd9625-e1c5-4d35-8aff-3e0854dd97d6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:49.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1867" for this suite. • [SLOW TEST:8.511 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":629,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:49.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:21:49.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be" in namespace "projected-8167" to be "success or failure" May 16 21:21:49.478: INFO: Pod "downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079419ms May 16 21:21:51.650: INFO: Pod "downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176049363s May 16 21:21:53.654: INFO: Pod "downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.180333042s STEP: Saw pod success May 16 21:21:53.654: INFO: Pod "downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be" satisfied condition "success or failure" May 16 21:21:53.657: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be container client-container: STEP: delete the pod May 16 21:21:53.690: INFO: Waiting for pod downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be to disappear May 16 21:21:53.706: INFO: Pod downwardapi-volume-a8b865a9-f868-4360-a879-0f1a041732be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:53.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8167" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":630,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:53.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 16 21:21:53.798: INFO: Waiting up to 5m0s for pod "pod-1672d60c-a735-4bad-bffb-8013a81c8417" in namespace "emptydir-9674" to be "success or failure" May 16 21:21:53.802: INFO: Pod "pod-1672d60c-a735-4bad-bffb-8013a81c8417": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415284ms May 16 21:21:55.848: INFO: Pod "pod-1672d60c-a735-4bad-bffb-8013a81c8417": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050170481s May 16 21:21:57.868: INFO: Pod "pod-1672d60c-a735-4bad-bffb-8013a81c8417": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070812127s STEP: Saw pod success May 16 21:21:57.868: INFO: Pod "pod-1672d60c-a735-4bad-bffb-8013a81c8417" satisfied condition "success or failure" May 16 21:21:57.880: INFO: Trying to get logs from node jerma-worker2 pod pod-1672d60c-a735-4bad-bffb-8013a81c8417 container test-container: STEP: delete the pod May 16 21:21:57.957: INFO: Waiting for pod pod-1672d60c-a735-4bad-bffb-8013a81c8417 to disappear May 16 21:21:58.188: INFO: Pod pod-1672d60c-a735-4bad-bffb-8013a81c8417 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:21:58.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9674" for this suite. 
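The (root,0666,default) emptydir case above boils down to: mount an emptyDir backed by the default (node disk) medium, create a file as root with mode 0666, and verify the resulting ownership and permissions from the container's output. A sketch (Go API types v0.17; the image and command are illustrative, the container name test-container is from the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container", // container name from the log above
                    Image: "busybox",
                    // Create a file as root with mode 0666 and print the result
                    // so the test can read it back from the logs.
                    Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Empty EmptyDirVolumeSource means the default medium
                        // (node-local disk); the (root,0644,tmpfs) variant later
                        // in this run sets Medium: corev1.StorageMediumMemory.
                        EmptyDir: &corev1.EmptyDirVolumeSource{},
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }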
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":639,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:21:58.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-d68e8835-6115-4c50-8c48-d1070421ded2 STEP: Creating configMap with name cm-test-opt-upd-a69cb5d6-d765-47ae-90bc-388a467bddd0 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d68e8835-6115-4c50-8c48-d1070421ded2 STEP: Updating configmap cm-test-opt-upd-a69cb5d6-d765-47ae-90bc-388a467bddd0 STEP: Creating configMap with name cm-test-opt-create-18ccf4d2-cd70-4ca4-ab78-3b5d0e9818c8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:22:08.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1511" for this suite. • [SLOW TEST:10.236 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":649,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:22:08.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 16 21:22:18.580: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:18.587: INFO: Pod pod-with-prestop-http-hook still exists May 16 21:22:20.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:20.591: INFO: Pod pod-with-prestop-http-hook still exists May 16 21:22:22.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:22.591: INFO: Pod pod-with-prestop-http-hook still exists May 16 21:22:24.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:24.591: INFO: Pod pod-with-prestop-http-hook still exists May 16 21:22:26.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:26.591: INFO: Pod pod-with-prestop-http-hook still exists May 16 21:22:28.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:28.592: INFO: Pod pod-with-prestop-http-hook still exists May 16 21:22:30.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 21:22:30.592: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:22:30.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6783" for this suite. • [SLOW TEST:22.156 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":654,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:22:30.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:22:31.231: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:22:33.369: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260951, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260951, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260951, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725260951, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:22:36.429: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:22:36.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4835" for this suite. STEP: Destroying namespace "webhook-4835-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":48,"skipped":672,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:22:36.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:22:53.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-806" for this suite. • [SLOW TEST:16.247 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":49,"skipped":685,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:22:53.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 16 21:22:53.341: INFO: Waiting up to 5m0s for pod "pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce" in namespace "emptydir-1755" to be "success or failure" May 16 21:22:53.366: INFO: Pod "pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce": Phase="Pending", Reason="", readiness=false. Elapsed: 24.2261ms May 16 21:22:55.369: INFO: Pod "pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027999363s May 16 21:22:57.373: INFO: Pod "pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031120571s STEP: Saw pod success May 16 21:22:57.373: INFO: Pod "pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce" satisfied condition "success or failure" May 16 21:22:57.374: INFO: Trying to get logs from node jerma-worker pod pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce container test-container: STEP: delete the pod May 16 21:22:57.438: INFO: Waiting for pod pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce to disappear May 16 21:22:57.788: INFO: Pod pod-de49b3ad-1f9f-4b00-a105-7af4d8ef49ce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:22:57.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1755" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:22:57.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:22:57.889: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:23:01.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1291" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":717,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:23:01.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 16 21:23:02.066: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 21:23:02.078: INFO: Waiting for terminating namespaces to be deleted... 
May 16 21:23:02.081: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 16 21:23:02.086: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:23:02.086: INFO: Container kindnet-cni ready: true, restart count 0 May 16 21:23:02.086: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:23:02.086: INFO: Container kube-proxy ready: true, restart count 0 May 16 21:23:02.086: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 16 21:23:02.092: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:23:02.092: INFO: Container kindnet-cni ready: true, restart count 0 May 16 21:23:02.092: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 16 21:23:02.092: INFO: Container kube-bench ready: false, restart count 0 May 16 21:23:02.092: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 21:23:02.092: INFO: Container kube-proxy ready: true, restart count 0 May 16 21:23:02.092: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 16 21:23:02.092: INFO: Container kube-hunter ready: false, restart count 0 May 16 21:23:02.092: INFO: pod-logs-websocket-9e9c8ed9-50a7-4da9-826e-58d90eddc3a8 from pods-1291 started at 2020-05-16 21:22:57 +0000 UTC (1 container statuses recorded) May 16 21:23:02.092: INFO: Container main ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8890e992-b40f-4c1e-b863-f868e22151a9 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-8890e992-b40f-4c1e-b863-f868e22151a9 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-8890e992-b40f-4c1e-b863-f868e22151a9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:28:10.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3164" for this suite.
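The conflict above is the wildcard rule: a hostPort bound with an empty hostIP (0.0.0.0) overlaps the same port and protocol on every specific address, so pod5 cannot be scheduled onto pod4's node; the test spends its five-minute timeout confirming pod5 stays Pending, hence the long runtime. The two declarations side by side (Go API types v0.17; the container port is illustrative, port 54322 and the hostIPs are from the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // pod4 binds 54322 on all addresses (empty HostIP means 0.0.0.0);
        // pod5 asks for the same port and protocol on 127.0.0.1. The wildcard
        // bind overlaps every address, so the scheduler reports a conflict.
        pod4Port := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54322, HostIP: "", Protocol: corev1.ProtocolTCP}
        pod5Port := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54322, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}
        fmt.Println(pod4Port.HostPort == pod5Port.HostPort) // same port, overlapping addresses: unschedulable
    }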
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.297 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":52,"skipped":731,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:28:10.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-8d8fb3a3-3e6c-44a8-95a5-ed1f0a2ff001 STEP: Creating a pod to test consume configMaps May 16 21:28:10.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7" in namespace "configmap-2862" to be "success or failure" May 16 21:28:10.357: INFO: Pod "pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971781ms May 16 21:28:12.361: INFO: Pod "pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007373578s May 16 21:28:14.366: INFO: Pod "pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011789154s STEP: Saw pod success May 16 21:28:14.366: INFO: Pod "pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7" satisfied condition "success or failure" May 16 21:28:14.369: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7 container configmap-volume-test: STEP: delete the pod May 16 21:28:14.399: INFO: Waiting for pod pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7 to disappear May 16 21:28:14.430: INFO: Pod pod-configmaps-a443cfe8-4fe0-4a7d-9885-92d5949f76f7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:28:14.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2862" for this suite. 
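The mapped, non-root configMap mount above combines a pod-level runAsUser with a ConfigMapVolumeSource whose Items remap keys to chosen paths instead of mounting every key under its own name. In outline (Go API types v0.17; the UID, key, paths and command are illustrative, the configMap name is from the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1000) // any non-root UID; the test's exact value isn't in the log
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                // "as non-root": every container runs with this UID.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Containers: []corev1.Container{{
                    Name:         "configmap-volume-test", // container name from the log above
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map-8d8fb3a3-3e6c-44a8-95a5-ed1f0a2ff001"},
                            // "with mappings": Items remaps a key to a chosen
                            // relative path inside the mount.
                            Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }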
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":732,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:28:14.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:28:14.532: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 16 21:28:14.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:14.566: INFO: Number of nodes with available pods: 0 May 16 21:28:14.566: INFO: Node jerma-worker is running more than one daemon pod May 16 21:28:15.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:15.572: INFO: Number of nodes with available pods: 0 May 16 21:28:15.572: INFO: Node jerma-worker is running more than one daemon pod May 16 21:28:16.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:16.751: INFO: Number of nodes with available pods: 0 May 16 21:28:16.751: INFO: Node jerma-worker is running more than one daemon pod May 16 21:28:17.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:17.572: INFO: Number of nodes with available pods: 0 May 16 21:28:17.572: INFO: Node jerma-worker is running more than one daemon pod May 16 21:28:18.572: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:18.576: INFO: Number of nodes with available pods: 1 May 16 21:28:18.576: INFO: Node jerma-worker is running more than one daemon pod May 16 21:28:19.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:19.577: INFO: Number of nodes with available pods: 2 May 16 21:28:19.577: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 16 21:28:19.698: INFO: Wrong image for pod: daemon-set-4p5wr. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:19.698: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:19.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:20.708: INFO: Wrong image for pod: daemon-set-4p5wr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:20.708: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:20.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:21.830: INFO: Wrong image for pod: daemon-set-4p5wr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:21.830: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:21.833: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:22.708: INFO: Wrong image for pod: daemon-set-4p5wr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:22.708: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:22.711: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:23.708: INFO: Wrong image for pod: daemon-set-4p5wr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:23.708: INFO: Pod daemon-set-4p5wr is not available May 16 21:28:23.708: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:23.710: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:24.708: INFO: Pod daemon-set-bllsw is not available May 16 21:28:24.708: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:24.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:25.709: INFO: Pod daemon-set-bllsw is not available May 16 21:28:25.709: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
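The image flip these log lines are waiting on (docker.io/library/httpd:2.4.38-alpine to gcr.io/kubernetes-e2e-test-images/agnhost:2.8) is the ordinary DaemonSet RollingUpdate path; driven by hand it would look roughly like the sketch below, where the container name app is an assumption (the suite patches the pod template directly through the API):

kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status daemonset/daemon-set   # returns once every node runs an updated, available pod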
May 16 21:28:25.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:26.709: INFO: Pod daemon-set-bllsw is not available May 16 21:28:26.709: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:26.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:27.727: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:27.732: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:28.708: INFO: Wrong image for pod: daemon-set-vhhlg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 16 21:28:28.708: INFO: Pod daemon-set-vhhlg is not available May 16 21:28:28.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:29.707: INFO: Pod daemon-set-2hhhw is not available May 16 21:28:29.710: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 16 21:28:29.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:29.716: INFO: Number of nodes with available pods: 1 May 16 21:28:29.716: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:28:30.721: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:30.724: INFO: Number of nodes with available pods: 1 May 16 21:28:30.724: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:28:31.758: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:31.762: INFO: Number of nodes with available pods: 1 May 16 21:28:31.762: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:28:32.722: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:28:32.725: INFO: Number of nodes with available pods: 2 May 16 21:28:32.725: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4004, will wait for the garbage collector to delete the pods May 16 21:28:32.799: INFO: Deleting DaemonSet.extensions daemon-set took: 5.768774ms May 16 21:28:33.099: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 300.283671ms May 16 21:28:39.502: INFO: Number of nodes with available pods: 0 May 16 21:28:39.502: INFO: Number of running nodes: 0, number of available pods: 0 May 16 21:28:39.505: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4004/daemonsets","resourceVersion":"16735187"},"items":null} May 16 21:28:39.508: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4004/pods","resourceVersion":"16735187"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:28:39.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4004" for this suite. • [SLOW TEST:25.113 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":54,"skipped":732,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:28:39.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1954 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1954 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1954 May 16 21:28:39.631: INFO: Found 0 stateful pods, waiting for 1 May 16 21:28:49.650: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 16 21:28:49.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:28:52.959: INFO: stderr: "I0516 21:28:52.818758 628 log.go:172] (0xc0004b4790) (0xc0007963c0) Create stream\nI0516 21:28:52.818801 628 log.go:172] (0xc0004b4790) (0xc0007963c0) 
Stream added, broadcasting: 1\nI0516 21:28:52.821046 628 log.go:172] (0xc0004b4790) Reply frame received for 1\nI0516 21:28:52.821094 628 log.go:172] (0xc0004b4790) (0xc0006c2000) Create stream\nI0516 21:28:52.821283 628 log.go:172] (0xc0004b4790) (0xc0006c2000) Stream added, broadcasting: 3\nI0516 21:28:52.822336 628 log.go:172] (0xc0004b4790) Reply frame received for 3\nI0516 21:28:52.822375 628 log.go:172] (0xc0004b4790) (0xc0006c20a0) Create stream\nI0516 21:28:52.822389 628 log.go:172] (0xc0004b4790) (0xc0006c20a0) Stream added, broadcasting: 5\nI0516 21:28:52.823230 628 log.go:172] (0xc0004b4790) Reply frame received for 5\nI0516 21:28:52.916825 628 log.go:172] (0xc0004b4790) Data frame received for 5\nI0516 21:28:52.916854 628 log.go:172] (0xc0006c20a0) (5) Data frame handling\nI0516 21:28:52.916867 628 log.go:172] (0xc0006c20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:28:52.950600 628 log.go:172] (0xc0004b4790) Data frame received for 5\nI0516 21:28:52.950631 628 log.go:172] (0xc0006c20a0) (5) Data frame handling\nI0516 21:28:52.950651 628 log.go:172] (0xc0004b4790) Data frame received for 3\nI0516 21:28:52.950658 628 log.go:172] (0xc0006c2000) (3) Data frame handling\nI0516 21:28:52.950672 628 log.go:172] (0xc0006c2000) (3) Data frame sent\nI0516 21:28:52.950681 628 log.go:172] (0xc0004b4790) Data frame received for 3\nI0516 21:28:52.950703 628 log.go:172] (0xc0006c2000) (3) Data frame handling\nI0516 21:28:52.952712 628 log.go:172] (0xc0004b4790) Data frame received for 1\nI0516 21:28:52.952740 628 log.go:172] (0xc0007963c0) (1) Data frame handling\nI0516 21:28:52.952758 628 log.go:172] (0xc0007963c0) (1) Data frame sent\nI0516 21:28:52.952766 628 log.go:172] (0xc0004b4790) (0xc0007963c0) Stream removed, broadcasting: 1\nI0516 21:28:52.952775 628 log.go:172] (0xc0004b4790) Go away received\nI0516 21:28:52.953489 628 log.go:172] (0xc0004b4790) (0xc0007963c0) Stream removed, broadcasting: 1\nI0516 21:28:52.953519 628 log.go:172] (0xc0004b4790) (0xc0006c2000) Stream removed, broadcasting: 3\nI0516 21:28:52.953537 628 log.go:172] (0xc0004b4790) (0xc0006c20a0) Stream removed, broadcasting: 5\n" May 16 21:28:52.959: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:28:52.959: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:28:52.962: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 16 21:29:02.967: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 21:29:02.967: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:29:02.980: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999615s May 16 21:29:03.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994443596s May 16 21:29:04.988: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990549895s May 16 21:29:05.992: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986860908s May 16 21:29:06.997: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982567326s May 16 21:29:08.015: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97766851s May 16 21:29:09.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.959295924s May 16 21:29:10.023: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.955448134s May 16 21:29:11.028: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 1.951273867s May 16 21:29:12.033: INFO: Verifying statefulset ss doesn't scale past 1 for another 946.347414ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1954 May 16 21:29:13.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:29:13.288: INFO: stderr: "I0516 21:29:13.174481 652 log.go:172] (0xc0000f4c60) (0xc000a4a140) Create stream\nI0516 21:29:13.174540 652 log.go:172] (0xc0000f4c60) (0xc000a4a140) Stream added, broadcasting: 1\nI0516 21:29:13.177334 652 log.go:172] (0xc0000f4c60) Reply frame received for 1\nI0516 21:29:13.177391 652 log.go:172] (0xc0000f4c60) (0xc00064fa40) Create stream\nI0516 21:29:13.177406 652 log.go:172] (0xc0000f4c60) (0xc00064fa40) Stream added, broadcasting: 3\nI0516 21:29:13.178593 652 log.go:172] (0xc0000f4c60) Reply frame received for 3\nI0516 21:29:13.178647 652 log.go:172] (0xc0000f4c60) (0xc00064fc20) Create stream\nI0516 21:29:13.178664 652 log.go:172] (0xc0000f4c60) (0xc00064fc20) Stream added, broadcasting: 5\nI0516 21:29:13.179858 652 log.go:172] (0xc0000f4c60) Reply frame received for 5\nI0516 21:29:13.280942 652 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0516 21:29:13.280972 652 log.go:172] (0xc00064fc20) (5) Data frame handling\nI0516 21:29:13.280982 652 log.go:172] (0xc00064fc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:29:13.281024 652 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0516 21:29:13.281093 652 log.go:172] (0xc00064fa40) (3) Data frame handling\nI0516 21:29:13.281259 652 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0516 21:29:13.281278 652 log.go:172] (0xc00064fc20) (5) Data frame handling\nI0516 21:29:13.281347 652 log.go:172] (0xc00064fa40) (3) Data frame sent\nI0516 21:29:13.281379 652 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0516 21:29:13.281399 652 log.go:172] (0xc00064fa40) (3) Data frame handling\nI0516 21:29:13.282909 652 log.go:172] (0xc0000f4c60) Data frame received for 1\nI0516 21:29:13.282928 652 log.go:172] (0xc000a4a140) (1) Data frame handling\nI0516 21:29:13.282939 652 log.go:172] (0xc000a4a140) (1) Data frame sent\nI0516 21:29:13.283082 652 log.go:172] (0xc0000f4c60) (0xc000a4a140) Stream removed, broadcasting: 1\nI0516 21:29:13.283118 652 log.go:172] (0xc0000f4c60) Go away received\nI0516 21:29:13.283399 652 log.go:172] (0xc0000f4c60) (0xc000a4a140) Stream removed, broadcasting: 1\nI0516 21:29:13.283413 652 log.go:172] (0xc0000f4c60) (0xc00064fa40) Stream removed, broadcasting: 3\nI0516 21:29:13.283419 652 log.go:172] (0xc0000f4c60) (0xc00064fc20) Stream removed, broadcasting: 5\n" May 16 21:29:13.288: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:29:13.288: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:29:13.305: INFO: Found 1 stateful pods, waiting for 3 May 16 21:29:23.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:29:23.310: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:29:23.310: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set 
ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 16 21:29:23.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:29:23.529: INFO: stderr: "I0516 21:29:23.447530 675 log.go:172] (0xc000b9a000) (0xc00093a000) Create stream\nI0516 21:29:23.447615 675 log.go:172] (0xc000b9a000) (0xc00093a000) Stream added, broadcasting: 1\nI0516 21:29:23.449468 675 log.go:172] (0xc000b9a000) Reply frame received for 1\nI0516 21:29:23.449505 675 log.go:172] (0xc000b9a000) (0xc000517540) Create stream\nI0516 21:29:23.449517 675 log.go:172] (0xc000b9a000) (0xc000517540) Stream added, broadcasting: 3\nI0516 21:29:23.450334 675 log.go:172] (0xc000b9a000) Reply frame received for 3\nI0516 21:29:23.450377 675 log.go:172] (0xc000b9a000) (0xc000afc000) Create stream\nI0516 21:29:23.450400 675 log.go:172] (0xc000b9a000) (0xc000afc000) Stream added, broadcasting: 5\nI0516 21:29:23.451260 675 log.go:172] (0xc000b9a000) Reply frame received for 5\nI0516 21:29:23.521987 675 log.go:172] (0xc000b9a000) Data frame received for 3\nI0516 21:29:23.522024 675 log.go:172] (0xc000517540) (3) Data frame handling\nI0516 21:29:23.522051 675 log.go:172] (0xc000517540) (3) Data frame sent\nI0516 21:29:23.522360 675 log.go:172] (0xc000b9a000) Data frame received for 3\nI0516 21:29:23.522396 675 log.go:172] (0xc000517540) (3) Data frame handling\nI0516 21:29:23.522423 675 log.go:172] (0xc000b9a000) Data frame received for 5\nI0516 21:29:23.522436 675 log.go:172] (0xc000afc000) (5) Data frame handling\nI0516 21:29:23.522450 675 log.go:172] (0xc000afc000) (5) Data frame sent\nI0516 21:29:23.522470 675 log.go:172] (0xc000b9a000) Data frame received for 5\nI0516 21:29:23.522480 675 log.go:172] (0xc000afc000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:29:23.523902 675 log.go:172] (0xc000b9a000) Data frame received for 1\nI0516 21:29:23.523994 675 log.go:172] (0xc00093a000) (1) Data frame handling\nI0516 21:29:23.524060 675 log.go:172] (0xc00093a000) (1) Data frame sent\nI0516 21:29:23.524087 675 log.go:172] (0xc000b9a000) (0xc00093a000) Stream removed, broadcasting: 1\nI0516 21:29:23.524108 675 log.go:172] (0xc000b9a000) Go away received\nI0516 21:29:23.524573 675 log.go:172] (0xc000b9a000) (0xc00093a000) Stream removed, broadcasting: 1\nI0516 21:29:23.524597 675 log.go:172] (0xc000b9a000) (0xc000517540) Stream removed, broadcasting: 3\nI0516 21:29:23.524612 675 log.go:172] (0xc000b9a000) (0xc000afc000) Stream removed, broadcasting: 5\n" May 16 21:29:23.529: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:29:23.529: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:29:23.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:29:23.765: INFO: stderr: "I0516 21:29:23.642861 697 log.go:172] (0xc000a1aa50) (0xc0007c20a0) Create stream\nI0516 21:29:23.642915 697 log.go:172] (0xc000a1aa50) (0xc0007c20a0) Stream added, broadcasting: 1\nI0516 21:29:23.644994 697 log.go:172] (0xc000a1aa50) Reply frame received for 1\nI0516 21:29:23.645021 697 log.go:172] (0xc000a1aa50) (0xc0007c2140) Create stream\nI0516 21:29:23.645029 697 log.go:172] (0xc000a1aa50) 
(0xc0007c2140) Stream added, broadcasting: 3\nI0516 21:29:23.646331 697 log.go:172] (0xc000a1aa50) Reply frame received for 3\nI0516 21:29:23.646359 697 log.go:172] (0xc000a1aa50) (0xc000633a40) Create stream\nI0516 21:29:23.646378 697 log.go:172] (0xc000a1aa50) (0xc000633a40) Stream added, broadcasting: 5\nI0516 21:29:23.647223 697 log.go:172] (0xc000a1aa50) Reply frame received for 5\nI0516 21:29:23.707245 697 log.go:172] (0xc000a1aa50) Data frame received for 5\nI0516 21:29:23.707274 697 log.go:172] (0xc000633a40) (5) Data frame handling\nI0516 21:29:23.707296 697 log.go:172] (0xc000633a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:29:23.756701 697 log.go:172] (0xc000a1aa50) Data frame received for 3\nI0516 21:29:23.756749 697 log.go:172] (0xc0007c2140) (3) Data frame handling\nI0516 21:29:23.756771 697 log.go:172] (0xc0007c2140) (3) Data frame sent\nI0516 21:29:23.756787 697 log.go:172] (0xc000a1aa50) Data frame received for 3\nI0516 21:29:23.756815 697 log.go:172] (0xc0007c2140) (3) Data frame handling\nI0516 21:29:23.756967 697 log.go:172] (0xc000a1aa50) Data frame received for 5\nI0516 21:29:23.756989 697 log.go:172] (0xc000633a40) (5) Data frame handling\nI0516 21:29:23.759719 697 log.go:172] (0xc000a1aa50) Data frame received for 1\nI0516 21:29:23.759753 697 log.go:172] (0xc0007c20a0) (1) Data frame handling\nI0516 21:29:23.759771 697 log.go:172] (0xc0007c20a0) (1) Data frame sent\nI0516 21:29:23.759805 697 log.go:172] (0xc000a1aa50) (0xc0007c20a0) Stream removed, broadcasting: 1\nI0516 21:29:23.759904 697 log.go:172] (0xc000a1aa50) Go away received\nI0516 21:29:23.760234 697 log.go:172] (0xc000a1aa50) (0xc0007c20a0) Stream removed, broadcasting: 1\nI0516 21:29:23.760255 697 log.go:172] (0xc000a1aa50) (0xc0007c2140) Stream removed, broadcasting: 3\nI0516 21:29:23.760265 697 log.go:172] (0xc000a1aa50) (0xc000633a40) Stream removed, broadcasting: 5\n" May 16 21:29:23.765: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:29:23.765: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:29:23.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:29:24.010: INFO: stderr: "I0516 21:29:23.895961 716 log.go:172] (0xc0000f5600) (0xc0005fbd60) Create stream\nI0516 21:29:23.896030 716 log.go:172] (0xc0000f5600) (0xc0005fbd60) Stream added, broadcasting: 1\nI0516 21:29:23.898803 716 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0516 21:29:23.898841 716 log.go:172] (0xc0000f5600) (0xc00052e640) Create stream\nI0516 21:29:23.898854 716 log.go:172] (0xc0000f5600) (0xc00052e640) Stream added, broadcasting: 3\nI0516 21:29:23.899726 716 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0516 21:29:23.899782 716 log.go:172] (0xc0000f5600) (0xc0007b5400) Create stream\nI0516 21:29:23.899815 716 log.go:172] (0xc0000f5600) (0xc0007b5400) Stream added, broadcasting: 5\nI0516 21:29:23.900732 716 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0516 21:29:23.977850 716 log.go:172] (0xc0000f5600) Data frame received for 5\nI0516 21:29:23.977879 716 log.go:172] (0xc0007b5400) (5) Data frame handling\nI0516 21:29:23.977898 716 log.go:172] (0xc0007b5400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:29:24.002292 716 log.go:172] 
(0xc0000f5600) Data frame received for 3\nI0516 21:29:24.002337     716 log.go:172] (0xc00052e640) (3) Data frame handling\nI0516 21:29:24.002366     716 log.go:172] (0xc00052e640) (3) Data frame sent\nI0516 21:29:24.002378     716 log.go:172] (0xc0000f5600) Data frame received for 3\nI0516 21:29:24.002390     716 log.go:172] (0xc00052e640) (3) Data frame handling\nI0516 21:29:24.002429     716 log.go:172] (0xc0000f5600) Data frame received for 5\nI0516 21:29:24.002453     716 log.go:172] (0xc0007b5400) (5) Data frame handling\nI0516 21:29:24.004472     716 log.go:172] (0xc0000f5600) Data frame received for 1\nI0516 21:29:24.004582     716 log.go:172] (0xc0005fbd60) (1) Data frame handling\nI0516 21:29:24.004614     716 log.go:172] (0xc0005fbd60) (1) Data frame sent\nI0516 21:29:24.004647     716 log.go:172] (0xc0000f5600) (0xc0005fbd60) Stream removed, broadcasting: 1\nI0516 21:29:24.004668     716 log.go:172] (0xc0000f5600) Go away received\nI0516 21:29:24.005353     716 log.go:172] (0xc0000f5600) (0xc0005fbd60) Stream removed, broadcasting: 1\nI0516 21:29:24.005375     716 log.go:172] (0xc0000f5600) (0xc00052e640) Stream removed, broadcasting: 3\nI0516 21:29:24.005384     716 log.go:172] (0xc0000f5600) (0xc0007b5400) Stream removed, broadcasting: 5\n" May 16 21:29:24.010: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:29:24.010: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:29:24.010: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:29:24.016: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 16 21:29:34.024: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 21:29:34.024: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 16 21:29:34.024: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 16 21:29:34.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999575s May 16 21:29:35.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994336161s May 16 21:29:36.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986506303s May 16 21:29:37.078: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982028032s May 16 21:29:38.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.951093098s May 16 21:29:39.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.928611379s May 16 21:29:40.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.915416961s May 16 21:29:41.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.910289525s May 16 21:29:42.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.874785508s May 16 21:29:43.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 869.95119ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1954 May 16 21:29:44.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:29:44.423: INFO: stderr: "I0516 21:29:44.311493     737 log.go:172] (0xc0009a84d0) (0xc000958140) Create stream\nI0516 21:29:44.311939     737 log.go:172] (0xc0009a84d0) (0xc000958140) Stream added, broadcasting: 1\nI0516 21:29:44.315046     737 log.go:172]
(0xc0009a84d0) Reply frame received for 1\nI0516 21:29:44.315091 737 log.go:172] (0xc0009a84d0) (0xc000649ae0) Create stream\nI0516 21:29:44.315107 737 log.go:172] (0xc0009a84d0) (0xc000649ae0) Stream added, broadcasting: 3\nI0516 21:29:44.316083 737 log.go:172] (0xc0009a84d0) Reply frame received for 3\nI0516 21:29:44.316112 737 log.go:172] (0xc0009a84d0) (0xc000649b80) Create stream\nI0516 21:29:44.316122 737 log.go:172] (0xc0009a84d0) (0xc000649b80) Stream added, broadcasting: 5\nI0516 21:29:44.317244 737 log.go:172] (0xc0009a84d0) Reply frame received for 5\nI0516 21:29:44.415277 737 log.go:172] (0xc0009a84d0) Data frame received for 5\nI0516 21:29:44.415335 737 log.go:172] (0xc000649b80) (5) Data frame handling\nI0516 21:29:44.415363 737 log.go:172] (0xc000649b80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:29:44.415386 737 log.go:172] (0xc0009a84d0) Data frame received for 3\nI0516 21:29:44.415398 737 log.go:172] (0xc000649ae0) (3) Data frame handling\nI0516 21:29:44.415412 737 log.go:172] (0xc000649ae0) (3) Data frame sent\nI0516 21:29:44.415426 737 log.go:172] (0xc0009a84d0) Data frame received for 3\nI0516 21:29:44.415446 737 log.go:172] (0xc000649ae0) (3) Data frame handling\nI0516 21:29:44.415553 737 log.go:172] (0xc0009a84d0) Data frame received for 5\nI0516 21:29:44.415586 737 log.go:172] (0xc000649b80) (5) Data frame handling\nI0516 21:29:44.417042 737 log.go:172] (0xc0009a84d0) Data frame received for 1\nI0516 21:29:44.417069 737 log.go:172] (0xc000958140) (1) Data frame handling\nI0516 21:29:44.417098 737 log.go:172] (0xc000958140) (1) Data frame sent\nI0516 21:29:44.417279 737 log.go:172] (0xc0009a84d0) (0xc000958140) Stream removed, broadcasting: 1\nI0516 21:29:44.417312 737 log.go:172] (0xc0009a84d0) Go away received\nI0516 21:29:44.417734 737 log.go:172] (0xc0009a84d0) (0xc000958140) Stream removed, broadcasting: 1\nI0516 21:29:44.417764 737 log.go:172] (0xc0009a84d0) (0xc000649ae0) Stream removed, broadcasting: 3\nI0516 21:29:44.417776 737 log.go:172] (0xc0009a84d0) (0xc000649b80) Stream removed, broadcasting: 5\n" May 16 21:29:44.423: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:29:44.423: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:29:44.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:29:44.628: INFO: stderr: "I0516 21:29:44.551903 757 log.go:172] (0xc0001182c0) (0xc0008c2000) Create stream\nI0516 21:29:44.551976 757 log.go:172] (0xc0001182c0) (0xc0008c2000) Stream added, broadcasting: 1\nI0516 21:29:44.554799 757 log.go:172] (0xc0001182c0) Reply frame received for 1\nI0516 21:29:44.554839 757 log.go:172] (0xc0001182c0) (0xc0008c20a0) Create stream\nI0516 21:29:44.554850 757 log.go:172] (0xc0001182c0) (0xc0008c20a0) Stream added, broadcasting: 3\nI0516 21:29:44.555898 757 log.go:172] (0xc0001182c0) Reply frame received for 3\nI0516 21:29:44.555952 757 log.go:172] (0xc0001182c0) (0xc0002c14a0) Create stream\nI0516 21:29:44.555972 757 log.go:172] (0xc0001182c0) (0xc0002c14a0) Stream added, broadcasting: 5\nI0516 21:29:44.557366 757 log.go:172] (0xc0001182c0) Reply frame received for 5\nI0516 21:29:44.621883 757 log.go:172] (0xc0001182c0) Data frame received for 5\nI0516 21:29:44.621919 757 log.go:172] (0xc0002c14a0) (5) Data frame 
handling\nI0516 21:29:44.621934 757 log.go:172] (0xc0002c14a0) (5) Data frame sent\nI0516 21:29:44.621946 757 log.go:172] (0xc0001182c0) Data frame received for 5\nI0516 21:29:44.621956 757 log.go:172] (0xc0002c14a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:29:44.621982 757 log.go:172] (0xc0001182c0) Data frame received for 3\nI0516 21:29:44.621995 757 log.go:172] (0xc0008c20a0) (3) Data frame handling\nI0516 21:29:44.622012 757 log.go:172] (0xc0008c20a0) (3) Data frame sent\nI0516 21:29:44.622021 757 log.go:172] (0xc0001182c0) Data frame received for 3\nI0516 21:29:44.622032 757 log.go:172] (0xc0008c20a0) (3) Data frame handling\nI0516 21:29:44.622926 757 log.go:172] (0xc0001182c0) Data frame received for 1\nI0516 21:29:44.622958 757 log.go:172] (0xc0008c2000) (1) Data frame handling\nI0516 21:29:44.622991 757 log.go:172] (0xc0008c2000) (1) Data frame sent\nI0516 21:29:44.623018 757 log.go:172] (0xc0001182c0) (0xc0008c2000) Stream removed, broadcasting: 1\nI0516 21:29:44.623046 757 log.go:172] (0xc0001182c0) Go away received\nI0516 21:29:44.623407 757 log.go:172] (0xc0001182c0) (0xc0008c2000) Stream removed, broadcasting: 1\nI0516 21:29:44.623426 757 log.go:172] (0xc0001182c0) (0xc0008c20a0) Stream removed, broadcasting: 3\nI0516 21:29:44.623434 757 log.go:172] (0xc0001182c0) (0xc0002c14a0) Stream removed, broadcasting: 5\n" May 16 21:29:44.628: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:29:44.628: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:29:44.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1954 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:29:44.939: INFO: stderr: "I0516 21:29:44.750703 778 log.go:172] (0xc000583760) (0xc000ae0320) Create stream\nI0516 21:29:44.750755 778 log.go:172] (0xc000583760) (0xc000ae0320) Stream added, broadcasting: 1\nI0516 21:29:44.753530 778 log.go:172] (0xc000583760) Reply frame received for 1\nI0516 21:29:44.753591 778 log.go:172] (0xc000583760) (0xc000ba00a0) Create stream\nI0516 21:29:44.753610 778 log.go:172] (0xc000583760) (0xc000ba00a0) Stream added, broadcasting: 3\nI0516 21:29:44.754773 778 log.go:172] (0xc000583760) Reply frame received for 3\nI0516 21:29:44.754825 778 log.go:172] (0xc000583760) (0xc000ba0140) Create stream\nI0516 21:29:44.754848 778 log.go:172] (0xc000583760) (0xc000ba0140) Stream added, broadcasting: 5\nI0516 21:29:44.756010 778 log.go:172] (0xc000583760) Reply frame received for 5\nI0516 21:29:44.934274 778 log.go:172] (0xc000583760) Data frame received for 5\nI0516 21:29:44.934302 778 log.go:172] (0xc000ba0140) (5) Data frame handling\nI0516 21:29:44.934320 778 log.go:172] (0xc000ba0140) (5) Data frame sent\nI0516 21:29:44.934329 778 log.go:172] (0xc000583760) Data frame received for 5\nI0516 21:29:44.934336 778 log.go:172] (0xc000ba0140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:29:44.934356 778 log.go:172] (0xc000583760) Data frame received for 3\nI0516 21:29:44.934364 778 log.go:172] (0xc000ba00a0) (3) Data frame handling\nI0516 21:29:44.934373 778 log.go:172] (0xc000ba00a0) (3) Data frame sent\nI0516 21:29:44.934381 778 log.go:172] (0xc000583760) Data frame received for 3\nI0516 21:29:44.934389 778 log.go:172] (0xc000ba00a0) (3) Data frame handling\nI0516 21:29:44.935727 778 log.go:172] 
(0xc000583760) Data frame received for 1\nI0516 21:29:44.935745 778 log.go:172] (0xc000ae0320) (1) Data frame handling\nI0516 21:29:44.935753 778 log.go:172] (0xc000ae0320) (1) Data frame sent\nI0516 21:29:44.935853 778 log.go:172] (0xc000583760) (0xc000ae0320) Stream removed, broadcasting: 1\nI0516 21:29:44.935897 778 log.go:172] (0xc000583760) Go away received\nI0516 21:29:44.936330 778 log.go:172] (0xc000583760) (0xc000ae0320) Stream removed, broadcasting: 1\nI0516 21:29:44.936351 778 log.go:172] (0xc000583760) (0xc000ba00a0) Stream removed, broadcasting: 3\nI0516 21:29:44.936363 778 log.go:172] (0xc000583760) (0xc000ba0140) Stream removed, broadcasting: 5\n" May 16 21:29:44.939: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:29:44.939: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:29:44.939: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 16 21:30:04.952: INFO: Deleting all statefulset in ns statefulset-1954 May 16 21:30:04.956: INFO: Scaling statefulset ss to 0 May 16 21:30:04.963: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:30:04.965: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:04.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1954" for this suite. • [SLOW TEST:85.434 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":55,"skipped":742,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:05.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:30:05.055: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-26f35246-7067-44d4-9330-66aa2555814c" in namespace "security-context-test-6475" to be "success or failure" May 16 21:30:05.069: INFO: Pod "busybox-privileged-false-26f35246-7067-44d4-9330-66aa2555814c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.21994ms May 16 21:30:07.074: INFO: Pod "busybox-privileged-false-26f35246-7067-44d4-9330-66aa2555814c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018845836s May 16 21:30:09.078: INFO: Pod "busybox-privileged-false-26f35246-7067-44d4-9330-66aa2555814c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023220245s May 16 21:30:09.078: INFO: Pod "busybox-privileged-false-26f35246-7067-44d4-9330-66aa2555814c" satisfied condition "success or failure" May 16 21:30:09.096: INFO: Got logs for pod "busybox-privileged-false-26f35246-7067-44d4-9330-66aa2555814c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:09.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6475" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:09.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:30:09.188: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 16 21:30:11.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3371 create -f -' May 16 21:30:15.016: INFO: stderr: "" May 16 21:30:15.016: INFO: stdout: "e2e-test-crd-publish-openapi-3048-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 16 21:30:15.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3371 delete e2e-test-crd-publish-openapi-3048-crds test-cr' May 16 21:30:15.117: INFO: stderr: "" May 16 21:30:15.117: INFO: stdout: "e2e-test-crd-publish-openapi-3048-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 16 21:30:15.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3371 apply -f -' May 16 21:30:15.365: INFO: stderr: "" May 16 21:30:15.365: INFO: stdout: 
"e2e-test-crd-publish-openapi-3048-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 16 21:30:15.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3371 delete e2e-test-crd-publish-openapi-3048-crds test-cr' May 16 21:30:15.468: INFO: stderr: "" May 16 21:30:15.468: INFO: stdout: "e2e-test-crd-publish-openapi-3048-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 16 21:30:15.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3048-crds' May 16 21:30:15.688: INFO: stderr: "" May 16 21:30:15.688: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3048-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:18.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3371" for this suite. • [SLOW TEST:9.511 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":57,"skipped":768,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:18.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 21:30:26.739: INFO: DNS probes using dns-3109/dns-test-0d002527-5bd8-4119-8628-9f46b675b65b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:26.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3109" for this suite. • [SLOW TEST:8.183 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":58,"skipped":789,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:26.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5593/configmap-test-993d0407-35c3-4b2c-ac0b-b0bbf7742f3d STEP: Creating a pod to test consume configMaps May 16 21:30:26.913: INFO: Waiting up to 5m0s for pod "pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a" in namespace "configmap-5593" to be "success or failure" May 16 21:30:27.322: INFO: Pod "pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a": Phase="Pending", Reason="", readiness=false. Elapsed: 409.065268ms May 16 21:30:29.327: INFO: Pod "pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.413842514s May 16 21:30:31.332: INFO: Pod "pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.418308944s May 16 21:30:33.346: INFO: Pod "pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.432799287s STEP: Saw pod success May 16 21:30:33.346: INFO: Pod "pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a" satisfied condition "success or failure" May 16 21:30:33.349: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a container env-test: STEP: delete the pod May 16 21:30:33.367: INFO: Waiting for pod pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a to disappear May 16 21:30:33.371: INFO: Pod pod-configmaps-a67200e2-78f6-4e51-a341-eeb91394046a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:33.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5593" for this suite. • [SLOW TEST:6.581 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":791,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:33.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3496" for this suite. • [SLOW TEST:17.117 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":60,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:50.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 16 21:30:55.151: INFO: Successfully updated pod "labelsupdate8b85ab85-7bd7-4c00-b374-d0b031cbfe45" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:30:59.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1304" for this suite. • [SLOW TEST:8.702 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":828,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:30:59.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 16 21:31:05.307: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3313 PodName:pod-sharedvolume-30eed8e1-ba85-447c-b1c8-857d8b791d9d ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:31:05.307: INFO: >>> kubeConfig: /root/.kube/config I0516 21:31:05.346516       6 log.go:172] (0xc004e644d0) (0xc000d96f00) Create stream I0516 21:31:05.346557       6
log.go:172] (0xc004e644d0) (0xc000d96f00) Stream added, broadcasting: 1 I0516 21:31:05.349343 6 log.go:172] (0xc004e644d0) Reply frame received for 1 I0516 21:31:05.349385 6 log.go:172] (0xc004e644d0) (0xc000d96fa0) Create stream I0516 21:31:05.349397 6 log.go:172] (0xc004e644d0) (0xc000d96fa0) Stream added, broadcasting: 3 I0516 21:31:05.350234 6 log.go:172] (0xc004e644d0) Reply frame received for 3 I0516 21:31:05.350252 6 log.go:172] (0xc004e644d0) (0xc000d97400) Create stream I0516 21:31:05.350258 6 log.go:172] (0xc004e644d0) (0xc000d97400) Stream added, broadcasting: 5 I0516 21:31:05.351058 6 log.go:172] (0xc004e644d0) Reply frame received for 5 I0516 21:31:05.408668 6 log.go:172] (0xc004e644d0) Data frame received for 3 I0516 21:31:05.408698 6 log.go:172] (0xc000d96fa0) (3) Data frame handling I0516 21:31:05.408706 6 log.go:172] (0xc000d96fa0) (3) Data frame sent I0516 21:31:05.408711 6 log.go:172] (0xc004e644d0) Data frame received for 3 I0516 21:31:05.408739 6 log.go:172] (0xc000d96fa0) (3) Data frame handling I0516 21:31:05.408763 6 log.go:172] (0xc004e644d0) Data frame received for 5 I0516 21:31:05.408778 6 log.go:172] (0xc000d97400) (5) Data frame handling I0516 21:31:05.410569 6 log.go:172] (0xc004e644d0) Data frame received for 1 I0516 21:31:05.410621 6 log.go:172] (0xc000d96f00) (1) Data frame handling I0516 21:31:05.410641 6 log.go:172] (0xc000d96f00) (1) Data frame sent I0516 21:31:05.410675 6 log.go:172] (0xc004e644d0) (0xc000d96f00) Stream removed, broadcasting: 1 I0516 21:31:05.410707 6 log.go:172] (0xc004e644d0) Go away received I0516 21:31:05.411217 6 log.go:172] (0xc004e644d0) (0xc000d96f00) Stream removed, broadcasting: 1 I0516 21:31:05.411237 6 log.go:172] (0xc004e644d0) (0xc000d96fa0) Stream removed, broadcasting: 3 I0516 21:31:05.411247 6 log.go:172] (0xc004e644d0) (0xc000d97400) Stream removed, broadcasting: 5 May 16 21:31:05.411: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:05.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3313" for this suite. 
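The pattern this test exercises is two containers in one pod mounting the same emptyDir volume, so a file written by one container is visible to the other; the exec into busybox-main-container above reads back a file the nginx container wrote. A minimal sketch of the same pattern follows; the pod, volume, and container names here are illustrative, not the generated names from the log:

  # shared-volume-demo.yaml: two containers share one emptyDir
  apiVersion: v1
  kind: Pod
  metadata:
    name: shared-volume-demo
  spec:
    volumes:
    - name: share
      emptyDir: {}
    containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo shared > /data/shareddata.txt && sleep 3600"]
      volumeMounts:
      - name: share
        mountPath: /data
    - name: reader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: share
        mountPath: /data

  kubectl apply -f shared-volume-demo.yaml
  # read from the other container, as the test's ExecWithOptions call does
  kubectl exec shared-volume-demo -c reader -- cat /data/shareddata.txt

The emptyDir is allocated per pod and deleted with it, which is why the test needs only the single pod object.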
• [SLOW TEST:6.221 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":62,"skipped":828,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:05.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:31:05.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 16 21:31:05.695: INFO: stderr: "" May 16 21:31:05.695: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:05.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9884" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":63,"skipped":834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:05.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:16.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4059" for this suite. • [SLOW TEST:11.144 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":64,"skipped":875,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:16.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:31:16.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac" in namespace "projected-8540" to be "success or failure" May 16 21:31:16.959: INFO: Pod "downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.000395ms May 16 21:31:18.963: INFO: Pod "downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00746148s May 16 21:31:20.967: INFO: Pod "downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac": Phase="Running", Reason="", readiness=true. Elapsed: 4.011670535s May 16 21:31:22.972: INFO: Pod "downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015736571s STEP: Saw pod success May 16 21:31:22.972: INFO: Pod "downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac" satisfied condition "success or failure" May 16 21:31:22.974: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac container client-container: STEP: delete the pod May 16 21:31:22.997: INFO: Waiting for pod downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac to disappear May 16 21:31:23.001: INFO: Pod downwardapi-volume-b8f785d4-162a-4149-93bf-72e382edb0ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:23.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8540" for this suite. • [SLOW TEST:6.159 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":878,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:23.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD May 16 21:31:23.099: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:37.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9513" for this suite.
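What "mark a version not served" amounts to is flipping served: false on one entry in spec.versions of the CRD; the apiserver then drops that version's definition from the published OpenAPI document while the other version keeps serving. A hedged sketch with an invented group and kind (the conformance test generates random CRD names):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
      listKind: WidgetList
    versions:
    - name: v1
      served: true           # published in the OpenAPI spec
      storage: true
      schema:
        openAPIV3Schema:
          type: object
    - name: v2
      served: false          # removed from the published spec; stored objects remain
      storage: false
      schema:
        openAPIV3Schema:
          type: object

One way to confirm which versions are published is to fetch the aggregated document with kubectl get --raw /openapi/v2 and search for the kind, which is essentially what the test's checks do.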
• [SLOW TEST:14.919 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":66,"skipped":893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:37.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 16 21:31:42.569: INFO: Successfully updated pod "annotationupdateaa3d0850-484e-4248-8c2c-eb4e8a01ae0f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:46.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2594" for this suite. 
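The annotation test works because a projected downwardAPI volume is kept in sync by the kubelet: the pod's volume file renders metadata.annotations, the test patches the annotations, and then waits for the file content to change. A minimal sketch of such a pod, with illustrative names rather than the generated ones above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-demo
    annotations:
      build: one
  spec:
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo

After kubectl annotate pod annotation-demo build=two --overwrite, the file is rewritten on a subsequent kubelet sync, which is why the test polls rather than asserting immediately.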
• [SLOW TEST:8.674 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:46.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 16 21:31:46.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5951 -- logs-generator --log-lines-total 100 --run-duration 20s' May 16 21:31:46.811: INFO: stderr: "" May 16 21:31:46.811: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 16 21:31:46.811: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 16 21:31:46.811: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5951" to be "running and ready, or succeeded" May 16 21:31:46.814: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.832513ms May 16 21:31:48.819: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007164732s May 16 21:31:50.823: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.011371601s May 16 21:31:50.823: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 16 21:31:50.823: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 16 21:31:50.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5951' May 16 21:31:50.942: INFO: stderr: "" May 16 21:31:50.942: INFO: stdout: "I0516 21:31:49.627911 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/7dpc 545\nI0516 21:31:49.828047 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/bk5 531\nI0516 21:31:50.028116 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/hhcd 349\nI0516 21:31:50.228078 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xqbx 314\nI0516 21:31:50.428128 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/mxv6 253\nI0516 21:31:50.628100 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/zwbw 271\nI0516 21:31:50.828116 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/g7d 200\n" STEP: limiting log lines May 16 21:31:50.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5951 --tail=1' May 16 21:31:51.031: INFO: stderr: "" May 16 21:31:51.031: INFO: stdout: "I0516 21:31:50.828116 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/g7d 200\n" May 16 21:31:51.031: INFO: got output "I0516 21:31:50.828116 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/g7d 200\n" STEP: limiting log bytes May 16 21:31:51.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5951 --limit-bytes=1' May 16 21:31:51.134: INFO: stderr: "" May 16 21:31:51.134: INFO: stdout: "I" May 16 21:31:51.134: INFO: got output "I" STEP: exposing timestamps May 16 21:31:51.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5951 --tail=1 --timestamps' May 16 21:31:51.232: INFO: stderr: "" May 16 21:31:51.232: INFO: stdout: "2020-05-16T21:31:51.028238251Z I0516 21:31:51.028128 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/gnx5 474\n" May 16 21:31:51.232: INFO: got output "2020-05-16T21:31:51.028238251Z I0516 21:31:51.028128 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/gnx5 474\n" STEP: restricting to a time range May 16 21:31:53.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5951 --since=1s' May 16 21:31:53.845: INFO: stderr: "" May 16 21:31:53.845: INFO: stdout: "I0516 21:31:53.028103 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/nc5t 313\nI0516 21:31:53.228105 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/b7r5 212\nI0516 21:31:53.428134 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/ckj 275\nI0516 21:31:53.628115 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/8vb 247\nI0516 21:31:53.828108 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/2ph9 228\n" May 16 21:31:53.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5951 --since=24h' May 16 21:31:53.969: INFO: stderr: "" May 16 21:31:53.969: INFO: stdout: "I0516 21:31:49.627911 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/7dpc 545\nI0516 21:31:49.828047 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/bk5 531\nI0516 21:31:50.028116 1 logs_generator.go:76] 2 PUT
/api/v1/namespaces/default/pods/hhcd 349\nI0516 21:31:50.228078 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xqbx 314\nI0516 21:31:50.428128 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/mxv6 253\nI0516 21:31:50.628100 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/zwbw 271\nI0516 21:31:50.828116 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/g7d 200\nI0516 21:31:51.028128 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/gnx5 474\nI0516 21:31:51.228060 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/mw4 454\nI0516 21:31:51.428124 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/d4n 351\nI0516 21:31:51.628094 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/nz7 202\nI0516 21:31:51.828073 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/bjqg 562\nI0516 21:31:52.028095 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/sk8 409\nI0516 21:31:52.228119 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/f2k 343\nI0516 21:31:52.428129 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/qxmp 528\nI0516 21:31:52.628103 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/stb 465\nI0516 21:31:52.828062 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/t2bz 281\nI0516 21:31:53.028103 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/nc5t 313\nI0516 21:31:53.228105 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/b7r5 212\nI0516 21:31:53.428134 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/ckj 275\nI0516 21:31:53.628115 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/8vb 247\nI0516 21:31:53.828108 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/2ph9 228\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 16 21:31:53.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5951' May 16 21:31:59.512: INFO: stderr: "" May 16 21:31:59.512: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:31:59.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5951" for this suite. 
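The four filters exercised above are ordinary kubectl logs flags and they compose. Equivalents without the test harness, using the pod name from this run (add -n kubectl-5951 to match the namespace above):

  kubectl logs logs-generator --tail=1                # only the last line
  kubectl logs logs-generator --limit-bytes=1         # truncate output to one byte
  kubectl logs logs-generator --tail=1 --timestamps   # prefix lines with RFC3339 timestamps
  kubectl logs logs-generator --since=1s              # only entries newer than one second
  kubectl logs logs-generator --since=24h             # effectively the whole log here

Note that --since filters by the timestamps the container runtime recorded, not by anything the application printed, which is why the test pairs it with a generator that emits a line at a fixed interval.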
• [SLOW TEST:12.914 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":68,"skipped":950,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:31:59.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:31:59.650: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:00.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5772" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":69,"skipped":956,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:00.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 16 21:32:00.806: INFO: Waiting up to 5m0s for pod "pod-cc70f9c1-02bf-42f3-be64-dcff13883a93" in namespace "emptydir-329" to be "success or failure" May 16 21:32:00.811: INFO: Pod "pod-cc70f9c1-02bf-42f3-be64-dcff13883a93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475233ms May 16 21:32:02.815: INFO: Pod "pod-cc70f9c1-02bf-42f3-be64-dcff13883a93": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008482333s May 16 21:32:04.818: INFO: Pod "pod-cc70f9c1-02bf-42f3-be64-dcff13883a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011290698s STEP: Saw pod success May 16 21:32:04.818: INFO: Pod "pod-cc70f9c1-02bf-42f3-be64-dcff13883a93" satisfied condition "success or failure" May 16 21:32:04.822: INFO: Trying to get logs from node jerma-worker2 pod pod-cc70f9c1-02bf-42f3-be64-dcff13883a93 container test-container: STEP: delete the pod May 16 21:32:04.853: INFO: Waiting for pod pod-cc70f9c1-02bf-42f3-be64-dcff13883a93 to disappear May 16 21:32:04.858: INFO: Pod pod-cc70f9c1-02bf-42f3-be64-dcff13883a93 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:04.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-329" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":957,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:04.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2085 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-2085 May 16 21:32:04.929: INFO: Found 0 stateful pods, waiting for 1 May 16 21:32:14.932: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 16 21:32:14.954: INFO: Deleting all statefulset in ns statefulset-2085 May 16 21:32:14.976: INFO: Scaling statefulset ss to 0 May 16 21:32:35.042: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:32:35.044: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:35.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2085" for this suite. 
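The scale subresource that the test gets and updates is a separate endpoint under the StatefulSet, so a controller can change replicas without write access to the full object; updating it is what modifies Spec.Replicas. Roughly, with the names from this run:

  # read the Scale object for the statefulset
  kubectl get --raw /apis/apps/v1/namespaces/statefulset-2085/statefulsets/ss/scale
  # kubectl scale writes through the same subresource
  kubectl scale statefulset ss --replicas=2 -n statefulset-2085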
• [SLOW TEST:30.196 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":71,"skipped":958,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:35.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 21:32:35.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3864' May 16 21:32:35.281: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 21:32:35.282: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 16 21:32:37.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3864' May 16 21:32:37.480: INFO: stderr: "" May 16 21:32:37.480: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:37.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3864" for this suite. 
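The deprecation warning in stderr is expected: generator-based kubectl run was already on its way out in this release. A non-deprecated equivalent of what the test does, using the same names, would be roughly:

  kubectl create deployment e2e-test-httpd-deployment \
    --image=docker.io/library/httpd:2.4.38-alpine -n kubectl-3864
  kubectl delete deployment e2e-test-httpd-deployment -n kubectl-3864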
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":72,"skipped":966,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:37.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7b04e358-999f-4a99-a009-cc343af8e06a STEP: Creating a pod to test consume configMaps May 16 21:32:37.743: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75" in namespace "configmap-5039" to be "success or failure" May 16 21:32:37.793: INFO: Pod "pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75": Phase="Pending", Reason="", readiness=false. Elapsed: 50.345106ms May 16 21:32:39.839: INFO: Pod "pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096215649s May 16 21:32:41.865: INFO: Pod "pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121952199s STEP: Saw pod success May 16 21:32:41.865: INFO: Pod "pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75" satisfied condition "success or failure" May 16 21:32:41.877: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75 container configmap-volume-test: STEP: delete the pod May 16 21:32:41.950: INFO: Waiting for pod pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75 to disappear May 16 21:32:41.988: INFO: Pod pod-configmaps-f4a7fff8-c610-4c21-8972-627e2d0edc75 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:41.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5039" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":977,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:41.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-9758c80f-282a-409a-a67e-c0dc3b2d4d29 STEP: Creating a pod to test consume secrets May 16 21:32:42.076: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568" in namespace "projected-6035" to be "success or failure" May 16 21:32:42.080: INFO: Pod "pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670875ms May 16 21:32:44.130: INFO: Pod "pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05299849s May 16 21:32:46.132: INFO: Pod "pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055765413s STEP: Saw pod success May 16 21:32:46.132: INFO: Pod "pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568" satisfied condition "success or failure" May 16 21:32:46.135: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568 container projected-secret-volume-test: STEP: delete the pod May 16 21:32:46.169: INFO: Waiting for pod pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568 to disappear May 16 21:32:46.183: INFO: Pod pod-projected-secrets-ee8862d8-8d62-4b68-bd62-b622276a5568 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:46.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6035" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":982,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:46.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 16 21:32:46.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 16 21:32:46.374: INFO: stderr: "" May 16 21:32:46.374: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:46.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7653" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":75,"skipped":991,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:46.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-4c061f2c-a316-409e-a512-2021167034b5 STEP: Creating a pod to test consume secrets May 16 21:32:46.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5" in namespace "projected-8912" to be "success or failure" May 16 21:32:46.488: INFO: Pod "pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.19312ms May 16 21:32:48.493: INFO: Pod "pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011456551s May 16 21:32:50.499: INFO: Pod "pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017354203s STEP: Saw pod success May 16 21:32:50.499: INFO: Pod "pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5" satisfied condition "success or failure" May 16 21:32:50.502: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5 container projected-secret-volume-test: STEP: delete the pod May 16 21:32:50.536: INFO: Waiting for pod pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5 to disappear May 16 21:32:50.593: INFO: Pod pod-projected-secrets-ec6460f9-f30b-450d-8bf1-44bf48924dc5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:50.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8912" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":993,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:50.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:50.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5614" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":77,"skipped":995,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:50.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-8241c9fa-6cd3-4f74-b447-c39df1826682 STEP: Creating a pod to test consume configMaps May 16 21:32:50.904: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56" in namespace "projected-9926" to be "success or failure" May 16 21:32:50.907: INFO: Pod "pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075063ms May 16 21:32:52.965: INFO: Pod "pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061327549s May 16 21:32:54.970: INFO: Pod "pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06618833s STEP: Saw pod success May 16 21:32:54.970: INFO: Pod "pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56" satisfied condition "success or failure" May 16 21:32:54.972: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56 container projected-configmap-volume-test: STEP: delete the pod May 16 21:32:54.992: INFO: Waiting for pod pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56 to disappear May 16 21:32:55.048: INFO: Pod pod-projected-configmaps-f42643bd-b12a-432a-968c-172585aedf56 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:55.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9926" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1009,"failed":0} ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:55.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:32:59.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-197" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":79,"skipped":1009,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:32:59.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-e9a53039-74e8-4dca-a854-440e0eb8921d STEP: Creating configMap with name cm-test-opt-upd-9163e6bc-3238-4a33-a939-e96a89c03f62 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e9a53039-74e8-4dca-a854-440e0eb8921d STEP: Updating configmap cm-test-opt-upd-9163e6bc-3238-4a33-a939-e96a89c03f62 STEP: Creating configMap with name cm-test-opt-create-c5f4276a-14d0-414b-87a6-36af054ae963 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:33:09.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4525" for this suite. 
• [SLOW TEST:10.474 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1014,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:33:09.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 16 21:33:09.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2276' May 16 21:33:10.154: INFO: stderr: "" May 16 21:33:10.154: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 21:33:10.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2276' May 16 21:33:10.275: INFO: stderr: "" May 16 21:33:10.275: INFO: stdout: "update-demo-nautilus-9ddll update-demo-nautilus-fznrt " May 16 21:33:10.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ddll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:10.390: INFO: stderr: "" May 16 21:33:10.390: INFO: stdout: "" May 16 21:33:10.390: INFO: update-demo-nautilus-9ddll is created but not running May 16 21:33:15.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2276' May 16 21:33:15.492: INFO: stderr: "" May 16 21:33:15.492: INFO: stdout: "update-demo-nautilus-9ddll update-demo-nautilus-fznrt " May 16 21:33:15.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ddll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:15.840: INFO: stderr: "" May 16 21:33:15.840: INFO: stdout: "true" May 16 21:33:15.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ddll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:15.927: INFO: stderr: "" May 16 21:33:15.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 21:33:15.927: INFO: validating pod update-demo-nautilus-9ddll May 16 21:33:15.939: INFO: got data: { "image": "nautilus.jpg" } May 16 21:33:15.939: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 21:33:15.939: INFO: update-demo-nautilus-9ddll is verified up and running May 16 21:33:15.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fznrt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:16.087: INFO: stderr: "" May 16 21:33:16.087: INFO: stdout: "true" May 16 21:33:16.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fznrt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:16.212: INFO: stderr: "" May 16 21:33:16.212: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 21:33:16.212: INFO: validating pod update-demo-nautilus-fznrt May 16 21:33:16.240: INFO: got data: { "image": "nautilus.jpg" } May 16 21:33:16.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 21:33:16.240: INFO: update-demo-nautilus-fznrt is verified up and running STEP: rolling-update to new replication controller May 16 21:33:16.243: INFO: scanned /root for discovery docs: May 16 21:33:16.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2276' May 16 21:33:39.826: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 16 21:33:39.826: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 21:33:39.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2276' May 16 21:33:39.922: INFO: stderr: "" May 16 21:33:39.922: INFO: stdout: "update-demo-kitten-5xjfk update-demo-kitten-k2qhb " May 16 21:33:39.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5xjfk -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:40.013: INFO: stderr: "" May 16 21:33:40.013: INFO: stdout: "true" May 16 21:33:40.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5xjfk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:40.115: INFO: stderr: "" May 16 21:33:40.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 16 21:33:40.115: INFO: validating pod update-demo-kitten-5xjfk May 16 21:33:40.125: INFO: got data: { "image": "kitten.jpg" } May 16 21:33:40.125: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 16 21:33:40.125: INFO: update-demo-kitten-5xjfk is verified up and running May 16 21:33:40.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-k2qhb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:40.227: INFO: stderr: "" May 16 21:33:40.227: INFO: stdout: "true" May 16 21:33:40.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-k2qhb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2276' May 16 21:33:40.335: INFO: stderr: "" May 16 21:33:40.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 16 21:33:40.335: INFO: validating pod update-demo-kitten-k2qhb May 16 21:33:40.339: INFO: got data: { "image": "kitten.jpg" } May 16 21:33:40.339: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 16 21:33:40.339: INFO: update-demo-kitten-k2qhb is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:33:40.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2276" for this suite. 
• [SLOW TEST:30.625 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":81,"skipped":1019,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:33:40.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 16 21:33:40.435: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:33:46.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-323" for this suite. • [SLOW TEST:5.833 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":82,"skipped":1036,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:33:46.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:34:03.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7434" for this suite. • [SLOW TEST:17.115 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":83,"skipped":1046,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:34:03.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 16 21:34:03.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 16 21:34:03.600: INFO: stderr: "" May 16 21:34:03.600: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:34:03.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5026" for this suite. 
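`kubectl api-versions` is a thin wrapper over the discovery API, so the assertion this test makes — that the core "v1" groupVersion is served — can be expressed with client-go's discovery client. A sketch under the same client-go v0.17.x assumption:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// Asking for the resource list of the "v1" groupVersion fails if the
	// core/legacy API group is not served, which is what the test asserts.
	if _, err := dc.ServerResourcesForGroupVersion("v1"); err != nil {
		panic(fmt.Errorf("v1 not in available api versions: %v", err))
	}
	fmt.Println("v1 is served")
}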
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":84,"skipped":1058,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:34:03.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 16 21:34:03.825: INFO: Pod name pod-release: Found 0 pods out of 1 May 16 21:34:08.828: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:34:09.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7753" for this suite. • [SLOW TEST:6.219 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":85,"skipped":1068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:34:09.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-62192c1c-5ef1-4ff3-9ff0-a5af725f3e08 in namespace container-probe-5685 May 16 21:34:13.974: INFO: Started pod test-webserver-62192c1c-5ef1-4ff3-9ff0-a5af725f3e08 in namespace container-probe-5685 STEP: checking the pod's current state and verifying that restartCount is present May 16 21:34:13.977: INFO: Initial restart count of pod test-webserver-62192c1c-5ef1-4ff3-9ff0-a5af725f3e08 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:38:14.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5685" for this suite. • [SLOW TEST:244.945 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1091,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:38:14.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 16 21:38:14.848: INFO: Waiting up to 5m0s for pod "pod-c9ee7226-0462-464f-a57d-76c8546409b3" in namespace "emptydir-5043" to be "success or failure" May 16 21:38:14.864: INFO: Pod "pod-c9ee7226-0462-464f-a57d-76c8546409b3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.347142ms May 16 21:38:16.868: INFO: Pod "pod-c9ee7226-0462-464f-a57d-76c8546409b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020009235s May 16 21:38:18.873: INFO: Pod "pod-c9ee7226-0462-464f-a57d-76c8546409b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02488523s STEP: Saw pod success May 16 21:38:18.873: INFO: Pod "pod-c9ee7226-0462-464f-a57d-76c8546409b3" satisfied condition "success or failure" May 16 21:38:18.875: INFO: Trying to get logs from node jerma-worker2 pod pod-c9ee7226-0462-464f-a57d-76c8546409b3 container test-container: STEP: delete the pod May 16 21:38:18.999: INFO: Waiting for pod pod-c9ee7226-0462-464f-a57d-76c8546409b3 to disappear May 16 21:38:19.014: INFO: Pod pod-c9ee7226-0462-464f-a57d-76c8546409b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:38:19.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5043" for this suite. 
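The pod behind this emptyDir check is a one-shot container with a default-medium emptyDir mount. A hedged reconstruction follows — the real test uses a purpose-built mounttest image and asserts the exact 0644/non-root combination, which busybox here only approximates; it reuses the clientset wiring from the first sketch above.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runEmptyDirModePod creates a one-shot pod that mounts a default-medium
// emptyDir and prints the volume's mode, roughly what the test's mounttest
// image does (busybox is substituted here for illustration).
func runEmptyDirModePod(clientset kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-mode-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach Succeeded
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium = node-local disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(pod) // v0.17.x: no context argument
	return err
}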
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1099,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:38:19.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 16 21:38:29.197: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.197: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.229711 6 log.go:172] (0xc002c16580) (0xc000e8e000) Create stream I0516 21:38:29.229741 6 log.go:172] (0xc002c16580) (0xc000e8e000) Stream added, broadcasting: 1 I0516 21:38:29.231451 6 log.go:172] (0xc002c16580) Reply frame received for 1 I0516 21:38:29.231486 6 log.go:172] (0xc002c16580) (0xc000e8ed20) Create stream I0516 21:38:29.231499 6 log.go:172] (0xc002c16580) (0xc000e8ed20) Stream added, broadcasting: 3 I0516 21:38:29.232355 6 log.go:172] (0xc002c16580) Reply frame received for 3 I0516 21:38:29.232390 6 log.go:172] (0xc002c16580) (0xc0004f7400) Create stream I0516 21:38:29.232402 6 log.go:172] (0xc002c16580) (0xc0004f7400) Stream added, broadcasting: 5 I0516 21:38:29.233481 6 log.go:172] (0xc002c16580) Reply frame received for 5 I0516 21:38:29.318872 6 log.go:172] (0xc002c16580) Data frame received for 5 I0516 21:38:29.318900 6 log.go:172] (0xc0004f7400) (5) Data frame handling I0516 21:38:29.318927 6 log.go:172] (0xc002c16580) Data frame received for 3 I0516 21:38:29.318956 6 log.go:172] (0xc000e8ed20) (3) Data frame handling I0516 21:38:29.318975 6 log.go:172] (0xc000e8ed20) (3) Data frame sent I0516 21:38:29.318985 6 log.go:172] (0xc002c16580) Data frame received for 3 I0516 21:38:29.318994 6 log.go:172] (0xc000e8ed20) (3) Data frame handling I0516 21:38:29.320774 6 log.go:172] (0xc002c16580) Data frame received for 1 I0516 21:38:29.320841 6 log.go:172] (0xc000e8e000) (1) Data frame handling I0516 21:38:29.320858 6 log.go:172] (0xc000e8e000) (1) Data frame sent I0516 21:38:29.320870 6 log.go:172] (0xc002c16580) (0xc000e8e000) Stream removed, broadcasting: 1 I0516 21:38:29.321004 6 log.go:172] (0xc002c16580) (0xc000e8e000) Stream removed, broadcasting: 1 I0516 21:38:29.321024 6 log.go:172] (0xc002c16580) (0xc000e8ed20) Stream removed, broadcasting: 3 I0516 21:38:29.321034 6 log.go:172] (0xc002c16580) (0xc0004f7400) Stream removed, broadcasting: 5 May 16 21:38:29.321: INFO: Exec stderr: "" May 16 21:38:29.321: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.321: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.321260 6 log.go:172] (0xc002c16580) Go away received I0516 21:38:29.354636 6 log.go:172] (0xc002c169a0) (0xc000e8ee60) Create stream I0516 21:38:29.354672 6 log.go:172] (0xc002c169a0) (0xc000e8ee60) Stream added, broadcasting: 1 I0516 21:38:29.356551 6 log.go:172] (0xc002c169a0) Reply frame received for 1 I0516 21:38:29.356600 6 log.go:172] (0xc002c169a0) (0xc000e8ef00) Create stream I0516 21:38:29.356616 6 log.go:172] (0xc002c169a0) (0xc000e8ef00) Stream added, broadcasting: 3 I0516 21:38:29.357922 6 log.go:172] (0xc002c169a0) Reply frame received for 3 I0516 21:38:29.357964 6 log.go:172] (0xc002c169a0) (0xc000aaa6e0) Create stream I0516 21:38:29.357978 6 log.go:172] (0xc002c169a0) (0xc000aaa6e0) Stream added, broadcasting: 5 I0516 21:38:29.358956 6 log.go:172] (0xc002c169a0) Reply frame received for 5 I0516 21:38:29.433693 6 log.go:172] (0xc002c169a0) Data frame received for 3 I0516 21:38:29.433737 6 log.go:172] (0xc000e8ef00) (3) Data frame handling I0516 21:38:29.433751 6 log.go:172] (0xc000e8ef00) (3) Data frame sent I0516 21:38:29.433765 6 log.go:172] (0xc002c169a0) Data frame received for 3 I0516 21:38:29.433779 6 log.go:172] (0xc000e8ef00) (3) Data frame handling I0516 21:38:29.433834 6 log.go:172] (0xc002c169a0) Data frame received for 5 I0516 21:38:29.433878 6 log.go:172] (0xc000aaa6e0) (5) Data frame handling I0516 21:38:29.435499 6 log.go:172] (0xc002c169a0) Data frame received for 1 I0516 21:38:29.435533 6 log.go:172] (0xc000e8ee60) (1) Data frame handling I0516 21:38:29.435561 6 log.go:172] (0xc000e8ee60) (1) Data frame sent I0516 21:38:29.435593 6 log.go:172] (0xc002c169a0) (0xc000e8ee60) Stream removed, broadcasting: 1 I0516 21:38:29.435732 6 log.go:172] (0xc002c169a0) (0xc000e8ee60) Stream removed, broadcasting: 1 I0516 21:38:29.435768 6 log.go:172] (0xc002c169a0) (0xc000e8ef00) Stream removed, broadcasting: 3 I0516 21:38:29.435788 6 log.go:172] (0xc002c169a0) (0xc000aaa6e0) Stream removed, broadcasting: 5 May 16 21:38:29.435: INFO: Exec stderr: "" I0516 21:38:29.435823 6 log.go:172] (0xc002c169a0) Go away received May 16 21:38:29.435: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.435: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.466655 6 log.go:172] (0xc002ca4210) (0xc001296aa0) Create stream I0516 21:38:29.466695 6 log.go:172] (0xc002ca4210) (0xc001296aa0) Stream added, broadcasting: 1 I0516 21:38:29.469880 6 log.go:172] (0xc002ca4210) Reply frame received for 1 I0516 21:38:29.469926 6 log.go:172] (0xc002ca4210) (0xc000aaaaa0) Create stream I0516 21:38:29.469942 6 log.go:172] (0xc002ca4210) (0xc000aaaaa0) Stream added, broadcasting: 3 I0516 21:38:29.471380 6 log.go:172] (0xc002ca4210) Reply frame received for 3 I0516 21:38:29.471410 6 log.go:172] (0xc002ca4210) (0xc001296b40) Create stream I0516 21:38:29.471419 6 log.go:172] (0xc002ca4210) (0xc001296b40) Stream added, broadcasting: 5 I0516 21:38:29.472284 6 log.go:172] (0xc002ca4210) Reply frame received for 5 I0516 21:38:29.538021 6 log.go:172] (0xc002ca4210) Data frame received for 3 I0516 21:38:29.538057 6 log.go:172] (0xc000aaaaa0) (3) Data frame handling I0516 21:38:29.538084 6 log.go:172] (0xc002ca4210) Data frame 
received for 5 I0516 21:38:29.538145 6 log.go:172] (0xc001296b40) (5) Data frame handling I0516 21:38:29.538182 6 log.go:172] (0xc000aaaaa0) (3) Data frame sent I0516 21:38:29.538205 6 log.go:172] (0xc002ca4210) Data frame received for 3 I0516 21:38:29.538226 6 log.go:172] (0xc000aaaaa0) (3) Data frame handling I0516 21:38:29.539785 6 log.go:172] (0xc002ca4210) Data frame received for 1 I0516 21:38:29.539821 6 log.go:172] (0xc001296aa0) (1) Data frame handling I0516 21:38:29.539847 6 log.go:172] (0xc001296aa0) (1) Data frame sent I0516 21:38:29.539882 6 log.go:172] (0xc002ca4210) (0xc001296aa0) Stream removed, broadcasting: 1 I0516 21:38:29.539906 6 log.go:172] (0xc002ca4210) Go away received I0516 21:38:29.539986 6 log.go:172] (0xc002ca4210) (0xc001296aa0) Stream removed, broadcasting: 1 I0516 21:38:29.540008 6 log.go:172] (0xc002ca4210) (0xc000aaaaa0) Stream removed, broadcasting: 3 I0516 21:38:29.540015 6 log.go:172] (0xc002ca4210) (0xc001296b40) Stream removed, broadcasting: 5 May 16 21:38:29.540: INFO: Exec stderr: "" May 16 21:38:29.540: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.540: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.572504 6 log.go:172] (0xc0025fe4d0) (0xc0010ab720) Create stream I0516 21:38:29.572536 6 log.go:172] (0xc0025fe4d0) (0xc0010ab720) Stream added, broadcasting: 1 I0516 21:38:29.574567 6 log.go:172] (0xc0025fe4d0) Reply frame received for 1 I0516 21:38:29.574603 6 log.go:172] (0xc0025fe4d0) (0xc000aaac80) Create stream I0516 21:38:29.574613 6 log.go:172] (0xc0025fe4d0) (0xc000aaac80) Stream added, broadcasting: 3 I0516 21:38:29.575439 6 log.go:172] (0xc0025fe4d0) Reply frame received for 3 I0516 21:38:29.575477 6 log.go:172] (0xc0025fe4d0) (0xc000e8f040) Create stream I0516 21:38:29.575491 6 log.go:172] (0xc0025fe4d0) (0xc000e8f040) Stream added, broadcasting: 5 I0516 21:38:29.576322 6 log.go:172] (0xc0025fe4d0) Reply frame received for 5 I0516 21:38:29.641886 6 log.go:172] (0xc0025fe4d0) Data frame received for 3 I0516 21:38:29.641927 6 log.go:172] (0xc000aaac80) (3) Data frame handling I0516 21:38:29.641938 6 log.go:172] (0xc000aaac80) (3) Data frame sent I0516 21:38:29.641960 6 log.go:172] (0xc0025fe4d0) Data frame received for 5 I0516 21:38:29.641999 6 log.go:172] (0xc000e8f040) (5) Data frame handling I0516 21:38:29.642018 6 log.go:172] (0xc0025fe4d0) Data frame received for 3 I0516 21:38:29.642033 6 log.go:172] (0xc000aaac80) (3) Data frame handling I0516 21:38:29.643029 6 log.go:172] (0xc0025fe4d0) Data frame received for 1 I0516 21:38:29.643041 6 log.go:172] (0xc0010ab720) (1) Data frame handling I0516 21:38:29.643057 6 log.go:172] (0xc0010ab720) (1) Data frame sent I0516 21:38:29.643074 6 log.go:172] (0xc0025fe4d0) (0xc0010ab720) Stream removed, broadcasting: 1 I0516 21:38:29.643089 6 log.go:172] (0xc0025fe4d0) Go away received I0516 21:38:29.643245 6 log.go:172] (0xc0025fe4d0) (0xc0010ab720) Stream removed, broadcasting: 1 I0516 21:38:29.643264 6 log.go:172] (0xc0025fe4d0) (0xc000aaac80) Stream removed, broadcasting: 3 I0516 21:38:29.643280 6 log.go:172] (0xc0025fe4d0) (0xc000e8f040) Stream removed, broadcasting: 5 May 16 21:38:29.643: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 16 21:38:29.643: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9560 
PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.643: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.672323 6 log.go:172] (0xc002c17130) (0xc000e8f4a0) Create stream I0516 21:38:29.672355 6 log.go:172] (0xc002c17130) (0xc000e8f4a0) Stream added, broadcasting: 1 I0516 21:38:29.674904 6 log.go:172] (0xc002c17130) Reply frame received for 1 I0516 21:38:29.674971 6 log.go:172] (0xc002c17130) (0xc000aaad20) Create stream I0516 21:38:29.674996 6 log.go:172] (0xc002c17130) (0xc000aaad20) Stream added, broadcasting: 3 I0516 21:38:29.676102 6 log.go:172] (0xc002c17130) Reply frame received for 3 I0516 21:38:29.676139 6 log.go:172] (0xc002c17130) (0xc0010ab9a0) Create stream I0516 21:38:29.676154 6 log.go:172] (0xc002c17130) (0xc0010ab9a0) Stream added, broadcasting: 5 I0516 21:38:29.677097 6 log.go:172] (0xc002c17130) Reply frame received for 5 I0516 21:38:29.736040 6 log.go:172] (0xc002c17130) Data frame received for 5 I0516 21:38:29.736087 6 log.go:172] (0xc0010ab9a0) (5) Data frame handling I0516 21:38:29.736115 6 log.go:172] (0xc002c17130) Data frame received for 3 I0516 21:38:29.736137 6 log.go:172] (0xc000aaad20) (3) Data frame handling I0516 21:38:29.736157 6 log.go:172] (0xc000aaad20) (3) Data frame sent I0516 21:38:29.736173 6 log.go:172] (0xc002c17130) Data frame received for 3 I0516 21:38:29.736187 6 log.go:172] (0xc000aaad20) (3) Data frame handling I0516 21:38:29.737348 6 log.go:172] (0xc002c17130) Data frame received for 1 I0516 21:38:29.737364 6 log.go:172] (0xc000e8f4a0) (1) Data frame handling I0516 21:38:29.737378 6 log.go:172] (0xc000e8f4a0) (1) Data frame sent I0516 21:38:29.737396 6 log.go:172] (0xc002c17130) (0xc000e8f4a0) Stream removed, broadcasting: 1 I0516 21:38:29.737500 6 log.go:172] (0xc002c17130) Go away received I0516 21:38:29.737557 6 log.go:172] (0xc002c17130) (0xc000e8f4a0) Stream removed, broadcasting: 1 I0516 21:38:29.737587 6 log.go:172] (0xc002c17130) (0xc000aaad20) Stream removed, broadcasting: 3 I0516 21:38:29.737597 6 log.go:172] (0xc002c17130) (0xc0010ab9a0) Stream removed, broadcasting: 5 May 16 21:38:29.737: INFO: Exec stderr: "" May 16 21:38:29.737: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.737: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.766619 6 log.go:172] (0xc001482000) (0xc000c78aa0) Create stream I0516 21:38:29.766645 6 log.go:172] (0xc001482000) (0xc000c78aa0) Stream added, broadcasting: 1 I0516 21:38:29.768461 6 log.go:172] (0xc001482000) Reply frame received for 1 I0516 21:38:29.768501 6 log.go:172] (0xc001482000) (0xc000aab9a0) Create stream I0516 21:38:29.768521 6 log.go:172] (0xc001482000) (0xc000aab9a0) Stream added, broadcasting: 3 I0516 21:38:29.769877 6 log.go:172] (0xc001482000) Reply frame received for 3 I0516 21:38:29.769925 6 log.go:172] (0xc001482000) (0xc000c790e0) Create stream I0516 21:38:29.769940 6 log.go:172] (0xc001482000) (0xc000c790e0) Stream added, broadcasting: 5 I0516 21:38:29.770844 6 log.go:172] (0xc001482000) Reply frame received for 5 I0516 21:38:29.839829 6 log.go:172] (0xc001482000) Data frame received for 5 I0516 21:38:29.839883 6 log.go:172] (0xc000c790e0) (5) Data frame handling I0516 21:38:29.839916 6 log.go:172] (0xc001482000) Data frame received for 3 I0516 21:38:29.839935 6 log.go:172] (0xc000aab9a0) (3) Data frame handling I0516 
21:38:29.839956 6 log.go:172] (0xc000aab9a0) (3) Data frame sent I0516 21:38:29.839972 6 log.go:172] (0xc001482000) Data frame received for 3 I0516 21:38:29.839988 6 log.go:172] (0xc000aab9a0) (3) Data frame handling I0516 21:38:29.841473 6 log.go:172] (0xc001482000) Data frame received for 1 I0516 21:38:29.841500 6 log.go:172] (0xc000c78aa0) (1) Data frame handling I0516 21:38:29.841513 6 log.go:172] (0xc000c78aa0) (1) Data frame sent I0516 21:38:29.841527 6 log.go:172] (0xc001482000) (0xc000c78aa0) Stream removed, broadcasting: 1 I0516 21:38:29.841649 6 log.go:172] (0xc001482000) (0xc000c78aa0) Stream removed, broadcasting: 1 I0516 21:38:29.841673 6 log.go:172] (0xc001482000) (0xc000aab9a0) Stream removed, broadcasting: 3 I0516 21:38:29.841685 6 log.go:172] (0xc001482000) (0xc000c790e0) Stream removed, broadcasting: 5 May 16 21:38:29.841: INFO: Exec stderr: "" I0516 21:38:29.841709 6 log.go:172] (0xc001482000) Go away received STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 16 21:38:29.841: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.841: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.874934 6 log.go:172] (0xc002c17810) (0xc00052c000) Create stream I0516 21:38:29.874965 6 log.go:172] (0xc002c17810) (0xc00052c000) Stream added, broadcasting: 1 I0516 21:38:29.885655 6 log.go:172] (0xc002c17810) Reply frame received for 1 I0516 21:38:29.885748 6 log.go:172] (0xc002c17810) (0xc00052d2c0) Create stream I0516 21:38:29.885778 6 log.go:172] (0xc002c17810) (0xc00052d2c0) Stream added, broadcasting: 3 I0516 21:38:29.887536 6 log.go:172] (0xc002c17810) Reply frame received for 3 I0516 21:38:29.887585 6 log.go:172] (0xc002c17810) (0xc00052de00) Create stream I0516 21:38:29.887598 6 log.go:172] (0xc002c17810) (0xc00052de00) Stream added, broadcasting: 5 I0516 21:38:29.891125 6 log.go:172] (0xc002c17810) Reply frame received for 5 I0516 21:38:29.960954 6 log.go:172] (0xc002c17810) Data frame received for 3 I0516 21:38:29.960984 6 log.go:172] (0xc00052d2c0) (3) Data frame handling I0516 21:38:29.961006 6 log.go:172] (0xc00052d2c0) (3) Data frame sent I0516 21:38:29.961022 6 log.go:172] (0xc002c17810) Data frame received for 3 I0516 21:38:29.961034 6 log.go:172] (0xc00052d2c0) (3) Data frame handling I0516 21:38:29.961072 6 log.go:172] (0xc002c17810) Data frame received for 5 I0516 21:38:29.961379 6 log.go:172] (0xc00052de00) (5) Data frame handling I0516 21:38:29.963246 6 log.go:172] (0xc002c17810) Data frame received for 1 I0516 21:38:29.963271 6 log.go:172] (0xc00052c000) (1) Data frame handling I0516 21:38:29.963289 6 log.go:172] (0xc00052c000) (1) Data frame sent I0516 21:38:29.963317 6 log.go:172] (0xc002c17810) (0xc00052c000) Stream removed, broadcasting: 1 I0516 21:38:29.963337 6 log.go:172] (0xc002c17810) Go away received I0516 21:38:29.963483 6 log.go:172] (0xc002c17810) (0xc00052c000) Stream removed, broadcasting: 1 I0516 21:38:29.963503 6 log.go:172] (0xc002c17810) (0xc00052d2c0) Stream removed, broadcasting: 3 I0516 21:38:29.963514 6 log.go:172] (0xc002c17810) (0xc00052de00) Stream removed, broadcasting: 5 May 16 21:38:29.963: INFO: Exec stderr: "" May 16 21:38:29.963: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} May 16 21:38:29.963: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:29.998084 6 log.go:172] (0xc002c17e40) (0xc000e045a0) Create stream I0516 21:38:29.998110 6 log.go:172] (0xc002c17e40) (0xc000e045a0) Stream added, broadcasting: 1 I0516 21:38:30.000665 6 log.go:172] (0xc002c17e40) Reply frame received for 1 I0516 21:38:30.000702 6 log.go:172] (0xc002c17e40) (0xc0010aba40) Create stream I0516 21:38:30.000718 6 log.go:172] (0xc002c17e40) (0xc0010aba40) Stream added, broadcasting: 3 I0516 21:38:30.002008 6 log.go:172] (0xc002c17e40) Reply frame received for 3 I0516 21:38:30.002047 6 log.go:172] (0xc002c17e40) (0xc000c79720) Create stream I0516 21:38:30.002067 6 log.go:172] (0xc002c17e40) (0xc000c79720) Stream added, broadcasting: 5 I0516 21:38:30.002980 6 log.go:172] (0xc002c17e40) Reply frame received for 5 I0516 21:38:30.066204 6 log.go:172] (0xc002c17e40) Data frame received for 5 I0516 21:38:30.066232 6 log.go:172] (0xc000c79720) (5) Data frame handling I0516 21:38:30.066262 6 log.go:172] (0xc002c17e40) Data frame received for 3 I0516 21:38:30.066306 6 log.go:172] (0xc0010aba40) (3) Data frame handling I0516 21:38:30.066350 6 log.go:172] (0xc0010aba40) (3) Data frame sent I0516 21:38:30.066371 6 log.go:172] (0xc002c17e40) Data frame received for 3 I0516 21:38:30.066387 6 log.go:172] (0xc0010aba40) (3) Data frame handling I0516 21:38:30.067693 6 log.go:172] (0xc002c17e40) Data frame received for 1 I0516 21:38:30.067717 6 log.go:172] (0xc000e045a0) (1) Data frame handling I0516 21:38:30.067728 6 log.go:172] (0xc000e045a0) (1) Data frame sent I0516 21:38:30.067741 6 log.go:172] (0xc002c17e40) (0xc000e045a0) Stream removed, broadcasting: 1 I0516 21:38:30.067764 6 log.go:172] (0xc002c17e40) Go away received I0516 21:38:30.067906 6 log.go:172] (0xc002c17e40) (0xc000e045a0) Stream removed, broadcasting: 1 I0516 21:38:30.067926 6 log.go:172] (0xc002c17e40) (0xc0010aba40) Stream removed, broadcasting: 3 I0516 21:38:30.067940 6 log.go:172] (0xc002c17e40) (0xc000c79720) Stream removed, broadcasting: 5 May 16 21:38:30.067: INFO: Exec stderr: "" May 16 21:38:30.067: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:30.068: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:30.098717 6 log.go:172] (0xc001482630) (0xc000c79f40) Create stream I0516 21:38:30.098745 6 log.go:172] (0xc001482630) (0xc000c79f40) Stream added, broadcasting: 1 I0516 21:38:30.101334 6 log.go:172] (0xc001482630) Reply frame received for 1 I0516 21:38:30.101399 6 log.go:172] (0xc001482630) (0xc0004040a0) Create stream I0516 21:38:30.101413 6 log.go:172] (0xc001482630) (0xc0004040a0) Stream added, broadcasting: 3 I0516 21:38:30.102466 6 log.go:172] (0xc001482630) Reply frame received for 3 I0516 21:38:30.102512 6 log.go:172] (0xc001482630) (0xc0010abae0) Create stream I0516 21:38:30.102535 6 log.go:172] (0xc001482630) (0xc0010abae0) Stream added, broadcasting: 5 I0516 21:38:30.103705 6 log.go:172] (0xc001482630) Reply frame received for 5 I0516 21:38:30.167397 6 log.go:172] (0xc001482630) Data frame received for 3 I0516 21:38:30.167431 6 log.go:172] (0xc0004040a0) (3) Data frame handling I0516 21:38:30.167452 6 log.go:172] (0xc0004040a0) (3) Data frame sent I0516 21:38:30.167472 6 log.go:172] (0xc001482630) Data frame received for 3 I0516 21:38:30.167484 6 log.go:172] (0xc0004040a0) (3) Data frame handling 
I0516 21:38:30.167523 6 log.go:172] (0xc001482630) Data frame received for 5 I0516 21:38:30.167567 6 log.go:172] (0xc0010abae0) (5) Data frame handling I0516 21:38:30.169543 6 log.go:172] (0xc001482630) Data frame received for 1 I0516 21:38:30.169567 6 log.go:172] (0xc000c79f40) (1) Data frame handling I0516 21:38:30.169579 6 log.go:172] (0xc000c79f40) (1) Data frame sent I0516 21:38:30.169608 6 log.go:172] (0xc001482630) (0xc000c79f40) Stream removed, broadcasting: 1 I0516 21:38:30.169726 6 log.go:172] (0xc001482630) (0xc000c79f40) Stream removed, broadcasting: 1 I0516 21:38:30.169747 6 log.go:172] (0xc001482630) (0xc0004040a0) Stream removed, broadcasting: 3 I0516 21:38:30.169809 6 log.go:172] (0xc001482630) Go away received I0516 21:38:30.169950 6 log.go:172] (0xc001482630) (0xc0010abae0) Stream removed, broadcasting: 5 May 16 21:38:30.169: INFO: Exec stderr: "" May 16 21:38:30.170: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9560 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:38:30.170: INFO: >>> kubeConfig: /root/.kube/config I0516 21:38:30.201468 6 log.go:172] (0xc001482c60) (0xc000d4c280) Create stream I0516 21:38:30.201490 6 log.go:172] (0xc001482c60) (0xc000d4c280) Stream added, broadcasting: 1 I0516 21:38:30.203868 6 log.go:172] (0xc001482c60) Reply frame received for 1 I0516 21:38:30.203895 6 log.go:172] (0xc001482c60) (0xc000404320) Create stream I0516 21:38:30.203908 6 log.go:172] (0xc001482c60) (0xc000404320) Stream added, broadcasting: 3 I0516 21:38:30.205315 6 log.go:172] (0xc001482c60) Reply frame received for 3 I0516 21:38:30.205388 6 log.go:172] (0xc001482c60) (0xc0010abcc0) Create stream I0516 21:38:30.205412 6 log.go:172] (0xc001482c60) (0xc0010abcc0) Stream added, broadcasting: 5 I0516 21:38:30.206474 6 log.go:172] (0xc001482c60) Reply frame received for 5 I0516 21:38:30.290282 6 log.go:172] (0xc001482c60) Data frame received for 3 I0516 21:38:30.290339 6 log.go:172] (0xc000404320) (3) Data frame handling I0516 21:38:30.290371 6 log.go:172] (0xc000404320) (3) Data frame sent I0516 21:38:30.290383 6 log.go:172] (0xc001482c60) Data frame received for 3 I0516 21:38:30.290393 6 log.go:172] (0xc000404320) (3) Data frame handling I0516 21:38:30.290524 6 log.go:172] (0xc001482c60) Data frame received for 5 I0516 21:38:30.290544 6 log.go:172] (0xc0010abcc0) (5) Data frame handling I0516 21:38:30.292743 6 log.go:172] (0xc001482c60) Data frame received for 1 I0516 21:38:30.292762 6 log.go:172] (0xc000d4c280) (1) Data frame handling I0516 21:38:30.292775 6 log.go:172] (0xc000d4c280) (1) Data frame sent I0516 21:38:30.292790 6 log.go:172] (0xc001482c60) (0xc000d4c280) Stream removed, broadcasting: 1 I0516 21:38:30.292823 6 log.go:172] (0xc001482c60) Go away received I0516 21:38:30.292914 6 log.go:172] (0xc001482c60) (0xc000d4c280) Stream removed, broadcasting: 1 I0516 21:38:30.292935 6 log.go:172] (0xc001482c60) (0xc000404320) Stream removed, broadcasting: 3 I0516 21:38:30.292947 6 log.go:172] (0xc001482c60) (0xc0010abcc0) Stream removed, broadcasting: 5 May 16 21:38:30.292: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:38:30.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9560" for this suite. 
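The "Create stream" / "Data frame" lines above are the client-side SPDY trace of the pod exec subresource: each `cat /etc/hosts` opens one connection with odd-numbered streams (1, 3, 5) for the error, stdout, and stderr channels. The same call can be driven with client-go's remotecommand package; a sketch against the v0.17.x API (Stream rather than the later StreamWithContext), with the URL parameters mirroring the ExecWithOptions record in the log:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the exec subresource request, as the framework's ExecWithOptions does.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-9560").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// Opens the SPDY streams seen as "broadcasting: 1/3/5" in the trace.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}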
• [SLOW TEST:11.281 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1118,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:38:30.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 21:38:30.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9047' May 16 21:38:30.447: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 21:38:30.447: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 16 21:38:32.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9047' May 16 21:38:32.641: INFO: stderr: "" May 16 21:38:32.641: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:38:32.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9047" for this suite. 
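As the stderr warning notes, the deployment/apps.v1 generator behind this `kubectl run` invocation was already deprecated; creating the Deployment explicitly sidesteps it. A sketch of roughly what the generator produced (illustrative names, client-go v0.17.x Create signature):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHTTPDDeployment approximates what `kubectl run e2e-test-httpd-deployment
// --image=docker.io/library/httpd:2.4.38-alpine` generated implicitly.
func createHTTPDDeployment(clientset kubernetes.Interface, ns string) error {
	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	one := int32(1)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	_, err := clientset.AppsV1().Deployments(ns).Create(dep)
	return err
}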
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":89,"skipped":1132,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:38:32.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 21:38:36.918: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:38:37.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8722" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1135,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:38:37.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:38:37.146: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0f101807-6c99-4bf1-9c46-b2c6433927c9" in namespace "security-context-test-9592" to be "success or failure" May 16 21:38:37.149: INFO: Pod "busybox-user-65534-0f101807-6c99-4bf1-9c46-b2c6433927c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.125729ms May 16 21:38:39.155: INFO: Pod "busybox-user-65534-0f101807-6c99-4bf1-9c46-b2c6433927c9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00878815s May 16 21:38:41.159: INFO: Pod "busybox-user-65534-0f101807-6c99-4bf1-9c46-b2c6433927c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012708268s May 16 21:38:41.159: INFO: Pod "busybox-user-65534-0f101807-6c99-4bf1-9c46-b2c6433927c9" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:38:41.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9592" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1138,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:38:41.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 16 21:38:41.290: INFO: PodSpec: initContainers in spec.initContainers May 16 21:39:28.947: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b637dd90-a6a9-4cd5-bb9f-263f87175b56", GenerateName:"", Namespace:"init-container-4372", SelfLink:"/api/v1/namespaces/init-container-4372/pods/pod-init-b637dd90-a6a9-4cd5-bb9f-263f87175b56", UID:"a5f3d98c-7a22-465e-aa56-71f364931e1f", ResourceVersion:"16738420", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725261921, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"290153353"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5hwlt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0029d3840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5hwlt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5hwlt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5hwlt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a6e138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00204bf20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a6e1c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a6e1e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a6e1e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a6e1ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725261921, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725261921, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725261921, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725261921, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.106", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.106"}}, StartTime:(*v1.Time)(0xc0035364a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001138e70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001138ee0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"containerd://52f8a2829b3b625e493987aa98a768cd69f96adad39279196a43585ffda9ed66", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035364e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035364c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002a6e26f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:39:28.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4372" for this suite. • [SLOW TEST:47.846 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":92,"skipped":1141,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:39:29.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-137 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-137 STEP: Deleting pre-stop pod May 16 21:39:42.248: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:39:42.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-137" for this suite. • [SLOW TEST:13.256 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":93,"skipped":1142,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:39:42.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0516 21:40:13.226627 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 21:40:13.226: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:40:13.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4994" for this suite. 
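------------------------------
The orphan-propagation behavior verified above can be reproduced directly with client-go: deleting a Deployment with PropagationPolicy=Orphan removes the Deployment object but leaves its ReplicaSet (and Pods) behind, which is exactly what the 30-second garbage-collector watch confirms. A minimal sketch, assuming v1.17-era (pre-context) client-go signatures; the deployment name is illustrative, not the test's generated one:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan: delete the owner but keep its dependents out of GC's reach.
	orphan := metav1.DeletePropagationOrphan
	err = clientset.AppsV1().Deployments("gc-4994").Delete(
		"simpletest-deployment", // illustrative; the test generates its own name
		&metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}
------------------------------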
• [SLOW TEST:30.962 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":94,"skipped":1161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:40:13.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 16 21:40:13.298: INFO: Waiting up to 5m0s for pod "pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d" in namespace "emptydir-7141" to be "success or failure" May 16 21:40:13.303: INFO: Pod "pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08132ms May 16 21:40:15.325: INFO: Pod "pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02632982s May 16 21:40:17.330: INFO: Pod "pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031339864s STEP: Saw pod success May 16 21:40:17.330: INFO: Pod "pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d" satisfied condition "success or failure" May 16 21:40:17.333: INFO: Trying to get logs from node jerma-worker pod pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d container test-container: STEP: delete the pod May 16 21:40:17.437: INFO: Waiting for pod pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d to disappear May 16 21:40:17.630: INFO: Pod pod-8b9550f5-40ed-4d96-8de8-d931e8f4fd9d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:40:17.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7141" for this suite. 
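------------------------------
The pod shape behind the tmpfs check above is small: an emptyDir volume with medium "Memory" is mounted as tmpfs, and the test container reports the mount's type and permission bits for the framework to assert on. A minimal sketch of an equivalent spec in Go (k8s.io/api types; the pod name and the busybox command are illustrative, not the test's own):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tmpfs-mode-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Print the mount so the volume type (tmpfs) and mode are visible in logs.
				Command:      []string{"sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium=Memory backs the emptyDir with tmpfs.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
------------------------------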
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1204,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:40:17.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 16 21:40:17.710: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:40:25.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9269" for this suite. • [SLOW TEST:7.577 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":96,"skipped":1221,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:40:25.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-937 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-937;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-937 A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-937;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-937.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-937.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-937.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-937.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-937.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-937.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-937.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-937.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.219.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.219.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.219.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.219.51_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-937 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-937;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-937 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-937;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-937.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-937.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-937.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-937.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-937.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-937.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-937.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-937.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-937.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-937.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.219.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.219.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.219.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.219.51_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 21:40:31.871: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.875: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.878: INFO: Unable to read wheezy_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.881: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.884: INFO: Unable to read wheezy_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.887: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.890: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.893: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.913: INFO: Unable to read jessie_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.916: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.919: INFO: Unable to read jessie_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.922: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.925: INFO: Unable to read jessie_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.927: INFO: Unable to read jessie_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.930: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.932: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:31.958: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-937 wheezy_tcp@dns-test-service.dns-937 wheezy_udp@dns-test-service.dns-937.svc wheezy_tcp@dns-test-service.dns-937.svc wheezy_udp@_http._tcp.dns-test-service.dns-937.svc wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-937 jessie_tcp@dns-test-service.dns-937 jessie_udp@dns-test-service.dns-937.svc jessie_tcp@dns-test-service.dns-937.svc jessie_udp@_http._tcp.dns-test-service.dns-937.svc jessie_tcp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:40:36.962: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.965: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.976: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod 
dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:36.999: INFO: Unable to read jessie_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.006: INFO: Unable to read jessie_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.010: INFO: Unable to read jessie_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.012: INFO: Unable to read jessie_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.016: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.019: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:37.033: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-937 wheezy_tcp@dns-test-service.dns-937 wheezy_udp@dns-test-service.dns-937.svc wheezy_tcp@dns-test-service.dns-937.svc wheezy_udp@_http._tcp.dns-test-service.dns-937.svc wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-937 jessie_tcp@dns-test-service.dns-937 jessie_udp@dns-test-service.dns-937.svc jessie_tcp@dns-test-service.dns-937.svc jessie_udp@_http._tcp.dns-test-service.dns-937.svc jessie_tcp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:40:41.962: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.965: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not 
find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.976: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.978: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:41.999: INFO: Unable to read jessie_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.005: INFO: Unable to read jessie_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.007: INFO: Unable to read jessie_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.010: INFO: Unable to read jessie_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.013: INFO: Unable to read jessie_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.016: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.019: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested 
resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:42.036: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-937 wheezy_tcp@dns-test-service.dns-937 wheezy_udp@dns-test-service.dns-937.svc wheezy_tcp@dns-test-service.dns-937.svc wheezy_udp@_http._tcp.dns-test-service.dns-937.svc wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-937 jessie_tcp@dns-test-service.dns-937 jessie_udp@dns-test-service.dns-937.svc jessie_tcp@dns-test-service.dns-937.svc jessie_udp@_http._tcp.dns-test-service.dns-937.svc jessie_tcp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:40:46.962: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.966: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:46.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.003: INFO: Unable to read jessie_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.006: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.009: INFO: Unable to read jessie_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.012: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.015: INFO: Unable to read jessie_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.018: INFO: Unable to read jessie_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.024: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:47.042: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-937 wheezy_tcp@dns-test-service.dns-937 wheezy_udp@dns-test-service.dns-937.svc wheezy_tcp@dns-test-service.dns-937.svc wheezy_udp@_http._tcp.dns-test-service.dns-937.svc wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-937 jessie_tcp@dns-test-service.dns-937 jessie_udp@dns-test-service.dns-937.svc jessie_tcp@dns-test-service.dns-937.svc jessie_udp@_http._tcp.dns-test-service.dns-937.svc jessie_tcp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:40:51.963: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.966: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.981: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:51.984: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.007: INFO: Unable to read jessie_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.010: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.012: INFO: Unable to read jessie_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.016: INFO: Unable to read jessie_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.019: INFO: Unable to read jessie_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:52.039: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-937 wheezy_tcp@dns-test-service.dns-937 wheezy_udp@dns-test-service.dns-937.svc wheezy_tcp@dns-test-service.dns-937.svc wheezy_udp@_http._tcp.dns-test-service.dns-937.svc wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-937 jessie_tcp@dns-test-service.dns-937 jessie_udp@dns-test-service.dns-937.svc jessie_tcp@dns-test-service.dns-937.svc jessie_udp@_http._tcp.dns-test-service.dns-937.svc jessie_tcp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:40:56.963: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.968: INFO: Unable to read wheezy_tcp@dns-test-service from pod 
dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.972: INFO: Unable to read wheezy_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.981: INFO: Unable to read wheezy_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.984: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:56.986: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.004: INFO: Unable to read jessie_udp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.007: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.009: INFO: Unable to read jessie_udp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.012: INFO: Unable to read jessie_tcp@dns-test-service.dns-937 from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.014: INFO: Unable to read jessie_udp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.018: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.021: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-937.svc from pod 
dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:40:57.038: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-937 wheezy_tcp@dns-test-service.dns-937 wheezy_udp@dns-test-service.dns-937.svc wheezy_tcp@dns-test-service.dns-937.svc wheezy_udp@_http._tcp.dns-test-service.dns-937.svc wheezy_tcp@_http._tcp.dns-test-service.dns-937.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-937 jessie_tcp@dns-test-service.dns-937 jessie_udp@dns-test-service.dns-937.svc jessie_tcp@dns-test-service.dns-937.svc jessie_udp@_http._tcp.dns-test-service.dns-937.svc jessie_tcp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:41:01.985: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-937.svc from pod dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc: the server could not find the requested resource (get pods dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc) May 16 21:41:02.091: INFO: Lookups using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-937.svc] May 16 21:41:07.043: INFO: DNS probes using dns-937/dns-test-e589c5b6-af8a-4fc8-a0a8-3ddebfbc05cc succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:41:07.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-937" for this suite. • [SLOW TEST:42.653 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":97,"skipped":1227,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:41:07.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-649 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 21:41:07.944: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 21:41:30.093: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.110:8080/dial?request=hostname&protocol=udp&host=10.244.1.56&port=8081&tries=1'] Namespace:pod-network-test-649 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:41:30.093: INFO: >>> kubeConfig: /root/.kube/config I0516 21:41:30.124280 6 log.go:172] (0xc002517e40) (0xc00212e3c0) Create stream I0516 21:41:30.124312 6 log.go:172] (0xc002517e40) (0xc00212e3c0) Stream added, broadcasting: 1 I0516 21:41:30.126381 6 log.go:172] (0xc002517e40) Reply frame received for 1 I0516 21:41:30.126422 6 log.go:172] (0xc002517e40) (0xc002054640) Create stream I0516 21:41:30.126453 6 log.go:172] (0xc002517e40) (0xc002054640) Stream added, broadcasting: 3 I0516 21:41:30.127520 6 log.go:172] (0xc002517e40) Reply frame received for 3 I0516 21:41:30.127560 6 log.go:172] (0xc002517e40) (0xc0020bcc80) Create stream I0516 21:41:30.127577 6 log.go:172] (0xc002517e40) (0xc0020bcc80) Stream added, broadcasting: 5 I0516 21:41:30.128680 6 log.go:172] (0xc002517e40) Reply frame received for 5 I0516 21:41:30.262269 6 log.go:172] (0xc002517e40) Data frame received for 3 I0516 21:41:30.262293 6 log.go:172] (0xc002054640) (3) Data frame handling I0516 21:41:30.262301 6 log.go:172] (0xc002054640) (3) Data frame sent I0516 21:41:30.263522 6 log.go:172] (0xc002517e40) Data frame received for 3 I0516 21:41:30.263619 6 log.go:172] (0xc002054640) (3) Data frame handling I0516 21:41:30.263893 6 log.go:172] (0xc002517e40) Data frame received for 5 I0516 21:41:30.263928 6 log.go:172] (0xc0020bcc80) (5) Data frame handling I0516 21:41:30.266261 6 log.go:172] (0xc002517e40) Data frame received for 1 I0516 21:41:30.266329 6 log.go:172] (0xc00212e3c0) (1) Data frame handling I0516 21:41:30.266349 6 log.go:172] (0xc00212e3c0) (1) Data frame sent I0516 21:41:30.266364 6 log.go:172] (0xc002517e40) (0xc00212e3c0) Stream removed, broadcasting: 1 I0516 21:41:30.266454 6 log.go:172] (0xc002517e40) Go away received I0516 21:41:30.266540 6 log.go:172] (0xc002517e40) (0xc00212e3c0) Stream removed, broadcasting: 1 I0516 21:41:30.266567 6 log.go:172] (0xc002517e40) (0xc002054640) Stream removed, broadcasting: 3 I0516 21:41:30.266591 6 log.go:172] (0xc002517e40) (0xc0020bcc80) Stream removed, broadcasting: 5 May 16 21:41:30.266: INFO: Waiting for responses: map[] May 16 21:41:30.269: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostname&protocol=udp&host=10.244.2.109&port=8081&tries=1'] Namespace:pod-network-test-649 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:41:30.269: INFO: >>> kubeConfig: /root/.kube/config I0516 21:41:30.297967 6 log.go:172] (0xc002ca44d0) (0xc0020bd0e0) Create stream I0516 21:41:30.297993 6 log.go:172] (0xc002ca44d0) (0xc0020bd0e0) Stream added, broadcasting: 1 I0516 21:41:30.299683 6 log.go:172] (0xc002ca44d0) Reply frame received for 1 I0516 21:41:30.299733 6 log.go:172] (0xc002ca44d0) (0xc00212e460) Create stream I0516 21:41:30.299747 6 log.go:172] (0xc002ca44d0) (0xc00212e460) Stream added, broadcasting: 3 I0516 21:41:30.300762 6 log.go:172] (0xc002ca44d0) Reply frame received for 3 I0516 21:41:30.300802 6 log.go:172] (0xc002ca44d0) (0xc00212e500) Create stream I0516 21:41:30.300815 6 log.go:172] (0xc002ca44d0) (0xc00212e500) Stream added, broadcasting: 5 I0516 21:41:30.302031 6 log.go:172] (0xc002ca44d0) Reply frame received for 5 I0516 21:41:30.369289 6 log.go:172] (0xc002ca44d0) 
Data frame received for 3 I0516 21:41:30.369333 6 log.go:172] (0xc00212e460) (3) Data frame handling I0516 21:41:30.369368 6 log.go:172] (0xc00212e460) (3) Data frame sent I0516 21:41:30.370140 6 log.go:172] (0xc002ca44d0) Data frame received for 3 I0516 21:41:30.370186 6 log.go:172] (0xc00212e460) (3) Data frame handling I0516 21:41:30.370220 6 log.go:172] (0xc002ca44d0) Data frame received for 5 I0516 21:41:30.370240 6 log.go:172] (0xc00212e500) (5) Data frame handling I0516 21:41:30.371777 6 log.go:172] (0xc002ca44d0) Data frame received for 1 I0516 21:41:30.371813 6 log.go:172] (0xc0020bd0e0) (1) Data frame handling I0516 21:41:30.371840 6 log.go:172] (0xc0020bd0e0) (1) Data frame sent I0516 21:41:30.372006 6 log.go:172] (0xc002ca44d0) (0xc0020bd0e0) Stream removed, broadcasting: 1 I0516 21:41:30.372058 6 log.go:172] (0xc002ca44d0) Go away received I0516 21:41:30.372142 6 log.go:172] (0xc002ca44d0) (0xc0020bd0e0) Stream removed, broadcasting: 1 I0516 21:41:30.372153 6 log.go:172] (0xc002ca44d0) (0xc00212e460) Stream removed, broadcasting: 3 I0516 21:41:30.372159 6 log.go:172] (0xc002ca44d0) (0xc00212e500) Stream removed, broadcasting: 5 May 16 21:41:30.372: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:41:30.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-649" for this suite. • [SLOW TEST:22.510 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1247,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:41:30.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:41:30.468: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 16 21:41:32.549: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:41:33.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8636" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":99,"skipped":1252,"failed":0} ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:41:33.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:41:34.268: INFO: Creating ReplicaSet my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b May 16 21:41:34.482: INFO: Pod name my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b: Found 0 pods out of 1 May 16 21:41:39.699: INFO: Pod name my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b: Found 1 pods out of 1 May 16 21:41:39.699: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b" is running May 16 21:41:39.702: INFO: Pod "my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b-82qwg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:41:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:41:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:41:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:41:34 +0000 UTC Reason: Message:}]) May 16 21:41:39.702: INFO: Trying to dial the pod May 16 21:41:44.715: INFO: Controller my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b: Got expected result from replica 1 [my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b-82qwg]: "my-hostname-basic-ede060aa-9680-4cad-8a07-ec6a81f6cb8b-82qwg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:41:44.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3967" for this suite. 
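------------------------------
The ReplicaSet under test is the classic serve-hostname pattern: one replica whose container answers HTTP with its own pod name, so dialing the replica and comparing the response against the pod name (as the "Got expected result from replica 1" line above does) proves the right pod is serving. A minimal sketch of such a ReplicaSet in Go; the name, label, port, and helper are illustrative, since the suite derives its own:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func newServeHostnameRS(name string) *appsv1.ReplicaSet {
	labels := map[string]string{"name": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
						Args:  []string{"serve-hostname"}, // replies with the pod's hostname
						Ports: []v1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
}

func main() { _ = newServeHostnameRS("my-hostname-basic-demo") }
------------------------------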
• [SLOW TEST:10.994 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":100,"skipped":1252,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:41:44.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:41:44.874: INFO: Create a RollingUpdate DaemonSet May 16 21:41:44.878: INFO: Check that daemon pods launch on every node of the cluster May 16 21:41:44.885: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:41:44.890: INFO: Number of nodes with available pods: 0 May 16 21:41:44.890: INFO: Node jerma-worker is running more than one daemon pod May 16 21:41:45.896: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:41:45.899: INFO: Number of nodes with available pods: 0 May 16 21:41:45.899: INFO: Node jerma-worker is running more than one daemon pod May 16 21:41:47.054: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:41:47.092: INFO: Number of nodes with available pods: 0 May 16 21:41:47.092: INFO: Node jerma-worker is running more than one daemon pod May 16 21:41:47.894: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:41:47.897: INFO: Number of nodes with available pods: 0 May 16 21:41:47.897: INFO: Node jerma-worker is running more than one daemon pod May 16 21:41:48.895: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:41:48.899: INFO: Number of nodes with available pods: 0 May 16 21:41:48.899: INFO: Node jerma-worker is running more than one daemon pod May 16 21:41:49.912: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:41:49.930: INFO: Number of nodes with available pods: 2 May 16 21:41:49.930: INFO: 
Number of running nodes: 2, number of available pods: 2 May 16 21:41:49.930: INFO: Update the DaemonSet to trigger a rollout May 16 21:41:49.967: INFO: Updating DaemonSet daemon-set May 16 21:41:59.984: INFO: Roll back the DaemonSet before rollout is complete May 16 21:41:59.990: INFO: Updating DaemonSet daemon-set May 16 21:41:59.990: INFO: Make sure DaemonSet rollback is complete May 16 21:42:00.002: INFO: Wrong image for pod: daemon-set-ctnc5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 16 21:42:00.002: INFO: Pod daemon-set-ctnc5 is not available May 16 21:42:00.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:42:01.069: INFO: Wrong image for pod: daemon-set-ctnc5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 16 21:42:01.069: INFO: Pod daemon-set-ctnc5 is not available May 16 21:42:01.074: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 21:42:02.067: INFO: Pod daemon-set-vvb94 is not available May 16 21:42:02.071: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9598, will wait for the garbage collector to delete the pods May 16 21:42:02.143: INFO: Deleting DaemonSet.extensions daemon-set took: 15.633904ms May 16 21:42:02.443: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27473ms May 16 21:42:09.547: INFO: Number of nodes with available pods: 0 May 16 21:42:09.547: INFO: Number of running nodes: 0, number of available pods: 0 May 16 21:42:09.550: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9598/daemonsets","resourceVersion":"16739355"},"items":null} May 16 21:42:09.553: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9598/pods","resourceVersion":"16739355"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:42:09.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9598" for this suite. 
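The rollback sequence logged above is reproducible with stock kubectl: create a RollingUpdate DaemonSet, push a rollout to an unpullable image, and undo it before the rollout finishes. "Without unnecessary restarts" means pods that never ran the bad image must be left alone by the rollback. A minimal sketch (the container name "app" is an assumption):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Trigger a rollout to an image that cannot be pulled, then roll back mid-flight:
kubectl set image daemonset/daemon-set app=foo:non-existent
kubectl rollout undo daemonset/daemon-set
# Pods still on the original image should keep RESTARTS at 0 after the undo.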
• [SLOW TEST:24.818 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":101,"skipped":1253,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:42:09.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 16 21:42:13.671: INFO: &Pod{ObjectMeta:{send-events-d3441ccb-6c5f-4518-b29d-1afa983e04cf events-5238 /api/v1/namespaces/events-5238/pods/send-events-d3441ccb-6c5f-4518-b29d-1afa983e04cf 7825494e-ae28-4a18-abe3-727949f4d305 16739377 0 2020-05-16 21:42:09 +0000 UTC map[name:foo time:636152838] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l92lt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l92lt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l92lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]S
ysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:42:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:42:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:42:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 21:42:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.114,StartTime:2020-05-16 21:42:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 21:42:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://043d823c589fbef4a3db536f26fb9eb449fa853aa5d808c4a9bc0286cb07881a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 16 21:42:15.677: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 16 21:42:17.682: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:42:17.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5238" for this suite. 
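The two assertions at the end ("Saw scheduler event", "Saw kubelet event") amount to checking that both components emitted events referencing the pod. By hand, with an illustrative pod name, the standard event reasons make this easy to verify:

kubectl run send-events-demo --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 -- serve-hostname
# Scheduler event:
kubectl get events --field-selector involvedObject.name=send-events-demo,reason=Scheduled
# Kubelet event (container start):
kubectl get events --field-selector involvedObject.name=send-events-demo,reason=Started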
• [SLOW TEST:8.180 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":102,"skipped":1255,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:42:17.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 16 21:42:17.835: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7171 /api/v1/namespaces/watch-7171/configmaps/e2e-watch-test-label-changed 8ce4a6f9-a8ed-4e24-8b52-0a482b1a0bce 16739426 0 2020-05-16 21:42:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 16 21:42:17.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7171 /api/v1/namespaces/watch-7171/configmaps/e2e-watch-test-label-changed 8ce4a6f9-a8ed-4e24-8b52-0a482b1a0bce 16739427 0 2020-05-16 21:42:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 16 21:42:17.835: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7171 /api/v1/namespaces/watch-7171/configmaps/e2e-watch-test-label-changed 8ce4a6f9-a8ed-4e24-8b52-0a482b1a0bce 16739428 0 2020-05-16 21:42:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 16 21:42:27.876: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7171 /api/v1/namespaces/watch-7171/configmaps/e2e-watch-test-label-changed 8ce4a6f9-a8ed-4e24-8b52-0a482b1a0bce 16739469 0 2020-05-16 21:42:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 16 21:42:27.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7171 /api/v1/namespaces/watch-7171/configmaps/e2e-watch-test-label-changed 8ce4a6f9-a8ed-4e24-8b52-0a482b1a0bce 16739470 0 2020-05-16 21:42:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 16 21:42:27.876: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7171 /api/v1/namespaces/watch-7171/configmaps/e2e-watch-test-label-changed 8ce4a6f9-a8ed-4e24-8b52-0a482b1a0bce 16739471 0 2020-05-16 21:42:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:42:27.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7171" for this suite. • [SLOW TEST:10.164 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":103,"skipped":1264,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:42:27.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:43:28.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2895" for this suite. 
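This probe spec produces no STEP output between [It] and [AfterEach] because the whole test is a 60-second observation: a pod whose readiness probe always fails must sit at READY 0/1 with RESTARTS 0, since readiness only gates traffic and never restarts a container (restarting is the liveness probe's job). A minimal reproduction:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-never -w   # expect Running, READY 0/1, RESTARTS 0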
• [SLOW TEST:60.133 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1277,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:43:28.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d3c05bac-67f7-4b7d-9cc3-22d05c897187 STEP: Creating a pod to test consume configMaps May 16 21:43:28.106: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758" in namespace "projected-532" to be "success or failure" May 16 21:43:28.110: INFO: Pod "pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538987ms May 16 21:43:30.202: INFO: Pod "pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096006823s May 16 21:43:32.238: INFO: Pod "pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132200898s STEP: Saw pod success May 16 21:43:32.238: INFO: Pod "pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758" satisfied condition "success or failure" May 16 21:43:32.241: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758 container projected-configmap-volume-test: STEP: delete the pod May 16 21:43:32.828: INFO: Waiting for pod pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758 to disappear May 16 21:43:32.944: INFO: Pod pod-projected-configmaps-d11e3e82-f7a2-43c0-9da9-554bfe213758 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:43:32.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-532" for this suite. 
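The defaultMode variant above differs from the plain consume test only in asserting the permissions of the projected files. A sketch of the same shape (stat -L follows the symlink that the kubelet's atomic writer creates, so the real file's mode is reported):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -Lc '%a' /etc/projected/data-1; cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400        # applied to every file in the projected volume
      sources:
      - configMap:
          name: projected-cm-demo
EOF
kubectl logs projected-mode-demo   # expect "400" followed by "value-1"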
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1298,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:43:32.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:43:33.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2" in namespace "projected-2046" to be "success or failure" May 16 21:43:33.020: INFO: Pod "downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.923087ms May 16 21:43:35.035: INFO: Pod "downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018273422s May 16 21:43:37.039: INFO: Pod "downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022957798s STEP: Saw pod success May 16 21:43:37.039: INFO: Pod "downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2" satisfied condition "success or failure" May 16 21:43:37.043: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2 container client-container: STEP: delete the pod May 16 21:43:37.085: INFO: Waiting for pod downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2 to disappear May 16 21:43:37.091: INFO: Pod downwardapi-volume-c09988d8-77b8-445f-94bd-acb4cd6361c2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:43:37.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2046" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1301,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:43:37.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 16 21:43:37.182: INFO: namespace kubectl-542 May 16 21:43:37.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-542' May 16 21:43:40.314: INFO: stderr: "" May 16 21:43:40.314: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 16 21:43:41.321: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:43:41.321: INFO: Found 0 / 1 May 16 21:43:42.317: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:43:42.317: INFO: Found 0 / 1 May 16 21:43:43.318: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:43:43.318: INFO: Found 0 / 1 May 16 21:43:44.319: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:43:44.319: INFO: Found 1 / 1 May 16 21:43:44.319: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 16 21:43:44.322: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:43:44.322: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 16 21:43:44.322: INFO: wait on agnhost-master startup in kubectl-542 May 16 21:43:44.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-x94fs agnhost-master --namespace=kubectl-542' May 16 21:43:44.438: INFO: stderr: "" May 16 21:43:44.438: INFO: stdout: "Paused\n" STEP: exposing RC May 16 21:43:44.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-542' May 16 21:43:44.586: INFO: stderr: "" May 16 21:43:44.586: INFO: stdout: "service/rm2 exposed\n" May 16 21:43:44.634: INFO: Service rm2 in namespace kubectl-542 found. STEP: exposing service May 16 21:43:46.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-542' May 16 21:43:46.779: INFO: stderr: "" May 16 21:43:46.779: INFO: stdout: "service/rm3 exposed\n" May 16 21:43:46.831: INFO: Service rm3 in namespace kubectl-542 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:43:48.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-542" for this suite. 
• [SLOW TEST:11.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":107,"skipped":1305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:43:48.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 16 21:43:53.962: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:43:53.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6457" for this suite. 
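Adoption and release hinge entirely on label/selector matching plus ownerReferences. A hand-runnable sketch (image and label values are illustrative): create a bare pod, create a ReplicaSet whose selector matches it, confirm the controller adopted the orphan, then change the label to see it released and replaced.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
# The orphan now carries an ownerReference to the ReplicaSet (adoption):
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
# Relabeling releases it; the ReplicaSet spins up a replacement pod:
kubectl label pod pod-adoption-release name=released --overwrite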
• [SLOW TEST:5.491 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":108,"skipped":1337,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:43:54.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7123.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7123.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 21:44:03.073: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.077: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.081: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.107: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.141: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.145: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.148: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.153: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:03.159: INFO: Lookups using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local] May 16 21:44:08.164: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource 
(get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.171: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.175: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.185: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.189: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.192: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.195: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:08.201: INFO: Lookups using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local] May 16 21:44:13.164: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.171: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.174: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local from 
pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.182: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.185: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.188: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.191: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:13.197: INFO: Lookups using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local] May 16 21:44:18.164: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.172: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.175: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.185: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.188: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods 
dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.190: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.192: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:18.198: INFO: Lookups using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local] May 16 21:44:23.164: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.167: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.170: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.173: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.191: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.194: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.196: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.199: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:23.210: INFO: Lookups using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local] May 16 21:44:28.163: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.166: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.168: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.170: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.176: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.178: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.181: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.183: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local from pod dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98: the server could not find the requested resource (get pods dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98) May 16 21:44:28.188: INFO: Lookups using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7123.svc.cluster.local jessie_udp@dns-test-service-2.dns-7123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7123.svc.cluster.local] May 16 21:44:33.220: INFO: DNS probes using dns-7123/dns-test-be204e82-cb8a-45cb-94c9-25b43cbc1b98 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:44:33.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7123" for this suite. • [SLOW TEST:39.560 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":109,"skipped":1348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:44:33.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:44:33.964: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b2a01e62-68c3-466b-9ce7-17eafbc8bab3" in namespace "security-context-test-550" to be "success or failure" May 16 21:44:33.983: INFO: Pod "alpine-nnp-false-b2a01e62-68c3-466b-9ce7-17eafbc8bab3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.247594ms May 16 21:44:36.158: INFO: Pod "alpine-nnp-false-b2a01e62-68c3-466b-9ce7-17eafbc8bab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193898159s May 16 21:44:38.173: INFO: Pod "alpine-nnp-false-b2a01e62-68c3-466b-9ce7-17eafbc8bab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209251703s May 16 21:44:38.173: INFO: Pod "alpine-nnp-false-b2a01e62-68c3-466b-9ce7-17eafbc8bab3" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:44:38.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-550" for this suite. 
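The alpine-nnp-false pod above verifies that allowPrivilegeEscalation: false translates to the no_new_privs bit on the container process. On reasonably recent kernels that bit is visible directly in /proc, so a minimal check looks like:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: docker.io/library/alpine:3.7
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false
EOF
kubectl logs nnp-false-demo   # expect a line like "NoNewPrivs: 1"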
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:44:38.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 16 21:44:38.852: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-945" to be "success or failure" May 16 21:44:38.859: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.035802ms May 16 21:44:40.863: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011312584s May 16 21:44:42.867: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014908601s May 16 21:44:44.871: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018557833s STEP: Saw pod success May 16 21:44:44.871: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 16 21:44:44.873: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 16 21:44:44.947: INFO: Waiting for pod pod-host-path-test to disappear May 16 21:44:44.955: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:44:44.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-945" for this suite. 
• [SLOW TEST:6.221 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:44:44.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495 May 16 21:44:45.033: INFO: Pod name my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495: Found 0 pods out of 1 May 16 21:44:50.040: INFO: Pod name my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495: Found 1 pods out of 1 May 16 21:44:50.040: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495" are running May 16 21:44:50.046: INFO: Pod "my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495-v9zts" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:44:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:44:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:44:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 21:44:45 +0000 UTC Reason: Message:}]) May 16 21:44:50.046: INFO: Trying to dial the pod May 16 21:44:55.058: INFO: Controller my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495: Got expected result from replica 1 [my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495-v9zts]: "my-hostname-basic-715bfca8-9b66-45cf-bbcd-5552dd168495-v9zts", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:44:55.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4336" for this suite. 
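This is the same serve-hostname scenario as the ReplicaSet spec earlier, driven through the legacy ReplicationController API, whose selector is a plain equality map rather than matchLabels. For comparison:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
EOF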
• [SLOW TEST:10.104 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":112,"skipped":1551,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:44:55.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:44:55.222: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:44:55.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9894" for this suite. 
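The status sub-resource test exercises a CRD with subresources.status enabled, which splits writes: requests to the /status endpoint persist only the status stanza, while writes to the main resource ignore status changes. A hedged sketch of such a CRD (the group, names, and permissive schema are all illustrative):

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}        # enables GET/PUT/PATCH on .../demos/<name>/status
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Recent kubectl versions can patch the sub-resource directly (older clients
# must call the /status REST path themselves):
#   kubectl patch demo mydemo --subresource=status --type=merge -p '{"status":{"ready":true}}'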
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":113,"skipped":1566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:44:55.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 16 21:44:55.980: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 16 21:44:55.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3300' May 16 21:44:56.366: INFO: stderr: "" May 16 21:44:56.366: INFO: stdout: "service/agnhost-slave created\n" May 16 21:44:56.366: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 16 21:44:56.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3300' May 16 21:44:56.739: INFO: stderr: "" May 16 21:44:56.739: INFO: stdout: "service/agnhost-master created\n" May 16 21:44:56.739: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 16 21:44:56.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3300' May 16 21:44:57.056: INFO: stderr: "" May 16 21:44:57.056: INFO: stdout: "service/frontend created\n" May 16 21:44:57.057: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 16 21:44:57.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3300' May 16 21:44:57.341: INFO: stderr: "" May 16 21:44:57.341: INFO: stdout: "deployment.apps/frontend created\n" May 16 21:44:57.342: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 16 21:44:57.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3300' May 16 21:44:57.671: INFO: stderr: "" May 16 21:44:57.671: INFO: stdout: "deployment.apps/agnhost-master created\n" May 16 21:44:57.671: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 16 21:44:57.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3300' May 16 21:44:57.931: INFO: stderr: "" May 16 21:44:57.931: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 16 21:44:57.931: INFO: Waiting for all frontend pods to be Running. May 16 21:45:07.981: INFO: Waiting for frontend to serve content. May 16 21:45:07.993: INFO: Trying to add a new entry to the guestbook. May 16 21:45:08.003: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 16 21:45:08.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3300' May 16 21:45:08.239: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 21:45:08.239: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 16 21:45:08.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3300' May 16 21:45:08.402: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 16 21:45:08.402: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 16 21:45:08.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3300' May 16 21:45:08.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 21:45:08.531: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 16 21:45:08.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3300' May 16 21:45:08.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 21:45:08.650: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 16 21:45:08.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3300' May 16 21:45:08.784: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 21:45:08.784: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 16 21:45:08.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3300' May 16 21:45:08.902: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 21:45:08.903: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:45:08.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3300" for this suite. 
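The "Waiting for all frontend pods to be Running" step can be reproduced by hand by watching the frontend selector the manifests above define (illustrative; the kubectl-3300 namespace name is generated per run):

kubectl get pods -l app=guestbook,tier=frontend --namespace=kubectl-3300 --watch

Teardown then force-deletes each component exactly as logged, with delete --grace-period=0 --force, which is why the immediate-deletion warning appears once per resource.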
• [SLOW TEST:13.044 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":114,"skipped":1611,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:45:08.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 21:45:09.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2399' May 16 21:45:09.678: INFO: stderr: "" May 16 21:45:09.678: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 16 21:45:09.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2399' May 16 21:45:19.233: INFO: stderr: "" May 16 21:45:19.233: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:45:19.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2399" for this suite. 
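The command under test is a plain, single-Pod invocation of kubectl run (reproduced from the log above; note that --generator=run-pod/v1 matches this kubectl vintage and was removed in later releases, where kubectl run always creates a bare Pod):

kubectl run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2399

With --restart=Never the result is a Pod rather than a managed workload, which is what "verifying the pod e2e-test-httpd-pod was created" checks.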
• [SLOW TEST:10.337 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":115,"skipped":1614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:45:19.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7358 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 21:45:19.397: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 21:45:43.538: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.1.69&port=8080&tries=1'] Namespace:pod-network-test-7358 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:45:43.538: INFO: >>> kubeConfig: /root/.kube/config I0516 21:45:43.574479 6 log.go:172] (0xc001f92c60) (0xc001133360) Create stream I0516 21:45:43.574514 6 log.go:172] (0xc001f92c60) (0xc001133360) Stream added, broadcasting: 1 I0516 21:45:43.576981 6 log.go:172] (0xc001f92c60) Reply frame received for 1 I0516 21:45:43.577021 6 log.go:172] (0xc001f92c60) (0xc000ae60a0) Create stream I0516 21:45:43.577038 6 log.go:172] (0xc001f92c60) (0xc000ae60a0) Stream added, broadcasting: 3 I0516 21:45:43.578377 6 log.go:172] (0xc001f92c60) Reply frame received for 3 I0516 21:45:43.578414 6 log.go:172] (0xc001f92c60) (0xc0011337c0) Create stream I0516 21:45:43.578428 6 log.go:172] (0xc001f92c60) (0xc0011337c0) Stream added, broadcasting: 5 I0516 21:45:43.579359 6 log.go:172] (0xc001f92c60) Reply frame received for 5 I0516 21:45:43.698097 6 log.go:172] (0xc001f92c60) Data frame received for 3 I0516 21:45:43.698126 6 log.go:172] (0xc000ae60a0) (3) Data frame handling I0516 21:45:43.698150 6 log.go:172] (0xc000ae60a0) (3) Data frame sent I0516 21:45:43.699162 6 log.go:172] (0xc001f92c60) Data frame received for 3 I0516 21:45:43.699192 6 log.go:172] (0xc000ae60a0) (3) Data frame handling I0516 21:45:43.699469 6 log.go:172] (0xc001f92c60) Data frame received for 5 I0516 21:45:43.699484 6 log.go:172] (0xc0011337c0) (5) Data frame handling I0516 21:45:43.701848 6 log.go:172] (0xc001f92c60) Data frame received for 1 I0516 
21:45:43.701871 6 log.go:172] (0xc001133360) (1) Data frame handling I0516 21:45:43.701901 6 log.go:172] (0xc001133360) (1) Data frame sent I0516 21:45:43.701919 6 log.go:172] (0xc001f92c60) (0xc001133360) Stream removed, broadcasting: 1 I0516 21:45:43.701947 6 log.go:172] (0xc001f92c60) Go away received I0516 21:45:43.702031 6 log.go:172] (0xc001f92c60) (0xc001133360) Stream removed, broadcasting: 1 I0516 21:45:43.702050 6 log.go:172] (0xc001f92c60) (0xc000ae60a0) Stream removed, broadcasting: 3 I0516 21:45:43.702061 6 log.go:172] (0xc001f92c60) (0xc0011337c0) Stream removed, broadcasting: 5 May 16 21:45:43.702: INFO: Waiting for responses: map[] May 16 21:45:43.707: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.2.123&port=8080&tries=1'] Namespace:pod-network-test-7358 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 21:45:43.707: INFO: >>> kubeConfig: /root/.kube/config I0516 21:45:43.769723 6 log.go:172] (0xc002c17970) (0xc000d4ce60) Create stream I0516 21:45:43.769757 6 log.go:172] (0xc002c17970) (0xc000d4ce60) Stream added, broadcasting: 1 I0516 21:45:43.772274 6 log.go:172] (0xc002c17970) Reply frame received for 1 I0516 21:45:43.772337 6 log.go:172] (0xc002c17970) (0xc000d4d180) Create stream I0516 21:45:43.772365 6 log.go:172] (0xc002c17970) (0xc000d4d180) Stream added, broadcasting: 3 I0516 21:45:43.773792 6 log.go:172] (0xc002c17970) Reply frame received for 3 I0516 21:45:43.773824 6 log.go:172] (0xc002c17970) (0xc0020bd220) Create stream I0516 21:45:43.773840 6 log.go:172] (0xc002c17970) (0xc0020bd220) Stream added, broadcasting: 5 I0516 21:45:43.774860 6 log.go:172] (0xc002c17970) Reply frame received for 5 I0516 21:45:43.836495 6 log.go:172] (0xc002c17970) Data frame received for 3 I0516 21:45:43.836526 6 log.go:172] (0xc000d4d180) (3) Data frame handling I0516 21:45:43.836548 6 log.go:172] (0xc000d4d180) (3) Data frame sent I0516 21:45:43.837353 6 log.go:172] (0xc002c17970) Data frame received for 5 I0516 21:45:43.837396 6 log.go:172] (0xc0020bd220) (5) Data frame handling I0516 21:45:43.837429 6 log.go:172] (0xc002c17970) Data frame received for 3 I0516 21:45:43.837449 6 log.go:172] (0xc000d4d180) (3) Data frame handling I0516 21:45:43.839167 6 log.go:172] (0xc002c17970) Data frame received for 1 I0516 21:45:43.839190 6 log.go:172] (0xc000d4ce60) (1) Data frame handling I0516 21:45:43.839202 6 log.go:172] (0xc000d4ce60) (1) Data frame sent I0516 21:45:43.839214 6 log.go:172] (0xc002c17970) (0xc000d4ce60) Stream removed, broadcasting: 1 I0516 21:45:43.839229 6 log.go:172] (0xc002c17970) Go away received I0516 21:45:43.839399 6 log.go:172] (0xc002c17970) (0xc000d4ce60) Stream removed, broadcasting: 1 I0516 21:45:43.839424 6 log.go:172] (0xc002c17970) (0xc000d4d180) Stream removed, broadcasting: 3 I0516 21:45:43.839431 6 log.go:172] (0xc002c17970) (0xc0020bd220) Stream removed, broadcasting: 5 May 16 21:45:43.839: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:45:43.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7358" for this suite. 
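The ExecWithOptions entries above boil down to one HTTP probe per target pod, issued from inside the host-network test pod against the agnhost /dial helper (command reproduced from the log; the pod IPs are specific to this run):

curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.1.69&port=8080&tries=1'

/dial asks the container at 10.244.1.70 to open its own HTTP connection to the target pod and relay the target's hostname back; "Waiting for responses: map[]" means every expected hostname was collected and none remain outstanding.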
• [SLOW TEST:24.585 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:45:43.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-273e4b8f-87f6-4bfb-8e83-6f3ca60c3285 STEP: Creating a pod to test consume secrets May 16 21:45:43.967: INFO: Waiting up to 5m0s for pod "pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd" in namespace "secrets-6643" to be "success or failure" May 16 21:45:43.970: INFO: Pod "pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.213257ms May 16 21:45:45.975: INFO: Pod "pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00766969s May 16 21:45:47.979: INFO: Pod "pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd": Phase="Running", Reason="", readiness=true. Elapsed: 4.01228475s May 16 21:45:50.109: INFO: Pod "pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141693712s STEP: Saw pod success May 16 21:45:50.109: INFO: Pod "pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd" satisfied condition "success or failure" May 16 21:45:50.139: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd container secret-volume-test: STEP: delete the pod May 16 21:45:50.187: INFO: Waiting for pod pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd to disappear May 16 21:45:50.198: INFO: Pod pod-secrets-5121ad04-8d44-4b67-b1ed-10191d51adbd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:45:50.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6643" for this suite. 
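A pod matching what this spec builds would look roughly like this (a sketch: the numeric uid/gid and mode values are illustrative, chosen to show the non-root + defaultMode + fsGroup combination named in the spec title):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000        # non-root
    fsGroup: 1001          # group ownership applied to the volume
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--file_mode=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # the suite uses a UUID-suffixed name
      defaultMode: 0440          # the mode the container reads back from the mounted file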
• [SLOW TEST:6.359 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:45:50.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:46:21.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1850" for this suite. STEP: Destroying namespace "nsdeletetest-5550" for this suite. May 16 21:46:21.905: INFO: Namespace nsdeletetest-5550 was already deleted STEP: Destroying namespace "nsdeletetest-2298" for this suite. 
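The sequence of STEPs above is reproducible with plain kubectl (names illustrative):

kubectl create namespace nsdeletetest
kubectl run test-pod --image=docker.io/library/httpd:2.4.38-alpine \
  --restart=Never --namespace=nsdeletetest
kubectl delete namespace nsdeletetest        # namespace stays Terminating until its contents are gone
kubectl create namespace nsdeletetest        # recreate once fully removed
kubectl get pods --namespace=nsdeletetest    # expect: No resources found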
• [SLOW TEST:31.701 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":118,"skipped":1763,"failed":0} [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:46:21.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0516 21:46:31.988964 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 21:46:31.989: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:46:31.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3837" for this suite. 
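The pods the rc creates are garbage-collected because each one carries an ownerReference pointing back at the rc; deleting the rc without an orphan policy lets the collector remove them. The relevant pod metadata looks like this (field values illustrative):

metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: example-rc
    uid: <rc uid>
    controller: true
    blockOwnerDeletion: true

Deleting the rc with propagationPolicy Orphan instead would strip these references and leave the pods running, which is the contrasting case the spec title rules out.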
• [SLOW TEST:10.087 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":119,"skipped":1763,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:46:31.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:46:32.080: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:46:38.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7034" for this suite. 
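The listing operation verified here is the same one exposed to clients as (cluster-scoped, so no namespace flag is needed):

kubectl get customresourcedefinitions.apiextensions.k8s.io

which performs a LIST against /apis/apiextensions.k8s.io/v1/customresourcedefinitions.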
• [SLOW TEST:6.110 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":120,"skipped":1781,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:46:38.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:46:38.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2964" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":121,"skipped":1798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:46:38.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:46:38.859: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:46:40.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262398, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262398, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262399, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:46:42.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262398, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262399, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262398, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:46:45.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:46:45.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5168-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:46:47.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7984" for this suite. STEP: Destroying namespace "webhook-7984-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.920 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":122,"skipped":1823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:46:47.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:46:47.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc" in namespace "downward-api-3172" to be "success or failure" May 16 21:46:47.307: INFO: Pod "downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.811862ms May 16 21:46:49.311: INFO: Pod "downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007955802s May 16 21:46:51.316: INFO: Pod "downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012918329s STEP: Saw pod success May 16 21:46:51.316: INFO: Pod "downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc" satisfied condition "success or failure" May 16 21:46:51.320: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc container client-container: STEP: delete the pod May 16 21:46:51.364: INFO: Waiting for pod downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc to disappear May 16 21:46:51.379: INFO: Pod downwardapi-volume-af7e8f50-dada-4721-b537-28ef245f11bc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:46:51.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3172" for this suite. 
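The "downward API volume plugin" pod built above exposes the container's own cpu limit as a file; a minimal sketch (the limit value and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # with the default divisor of 1, 1250m is rounded up and surfaces as 2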
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1848,"failed":0} SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:46:51.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ce06e80e-995a-4121-a8d9-66808f2968e3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ce06e80e-995a-4121-a8d9-66808f2968e3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:09.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9571" for this suite. • [SLOW TEST:78.490 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1850,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:09.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 16 21:48:18.082: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:18.094: INFO: Pod pod-with-prestop-exec-hook still exists May 16 21:48:20.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:20.100: INFO: Pod pod-with-prestop-exec-hook still exists May 16 21:48:22.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:22.099: INFO: Pod pod-with-prestop-exec-hook still exists May 16 21:48:24.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:24.099: INFO: Pod pod-with-prestop-exec-hook still exists May 16 21:48:26.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:26.098: INFO: Pod pod-with-prestop-exec-hook still exists May 16 21:48:28.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:28.099: INFO: Pod pod-with-prestop-exec-hook still exists May 16 21:48:30.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 21:48:30.099: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:30.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9053" for this suite. • [SLOW TEST:20.226 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:30.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:48:30.267: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:31.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4302" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":126,"skipped":1896,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:31.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:48:31.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a" in namespace "downward-api-8575" to be "success or failure" May 16 21:48:31.554: INFO: Pod "downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454332ms May 16 21:48:33.590: INFO: Pod "downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039831995s May 16 21:48:35.596: INFO: Pod "downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046577476s STEP: Saw pod success May 16 21:48:35.596: INFO: Pod "downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a" satisfied condition "success or failure" May 16 21:48:35.608: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a container client-container: STEP: delete the pod May 16 21:48:35.657: INFO: Waiting for pod downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a to disappear May 16 21:48:35.662: INFO: Pod downwardapi-volume-54c2ad68-0fce-4e82-bd8b-181f256e9a4a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:35.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8575" for this suite. 
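This spec is the same downward API volume shape as the cpu-limit case above, with one difference: the container sets no resources.limits.cpu at all. The resourceFieldRef item (sketch, continuing the earlier example):

      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # no limit set, so the node's allocatable cpu is reported instead

then falls back to the node's allocatable CPU, which is exactly what the spec title asserts.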
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1914,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:35.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 16 21:48:35.761: INFO: Waiting up to 5m0s for pod "client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4" in namespace "containers-4961" to be "success or failure" May 16 21:48:35.782: INFO: Pod "client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4": Phase="Pending", Reason="", readiness=false. Elapsed: 21.061745ms May 16 21:48:37.790: INFO: Pod "client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029384084s May 16 21:48:39.794: INFO: Pod "client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033264814s STEP: Saw pod success May 16 21:48:39.794: INFO: Pod "client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4" satisfied condition "success or failure" May 16 21:48:39.796: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4 container test-container: STEP: delete the pod May 16 21:48:39.828: INFO: Waiting for pod client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4 to disappear May 16 21:48:39.858: INFO: Pod client-containers-0dc259a8-be7f-420c-9ef1-3cd42dcc98e4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:39.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4961" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":1916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:39.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:39.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1474" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":129,"skipped":1948,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:39.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-c5db5826-3708-493a-b423-1773dd14a4b1 STEP: Creating a pod to test consume configMaps May 16 21:48:40.013: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12" in namespace "projected-1312" to be "success or failure" May 16 21:48:40.028: INFO: Pod "pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12": Phase="Pending", Reason="", readiness=false. Elapsed: 14.587293ms May 16 21:48:42.032: INFO: Pod "pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019224301s May 16 21:48:44.036: INFO: Pod "pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022775302s STEP: Saw pod success May 16 21:48:44.036: INFO: Pod "pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12" satisfied condition "success or failure" May 16 21:48:44.038: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12 container projected-configmap-volume-test: STEP: delete the pod May 16 21:48:44.070: INFO: Waiting for pod pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12 to disappear May 16 21:48:44.083: INFO: Pod pod-projected-configmaps-0c0fd84e-6af3-4d70-9c2a-8087b576ae12 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:44.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1312" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":1954,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:44.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:48:44.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5" in namespace "projected-2665" to be "success or failure" May 16 21:48:44.207: INFO: Pod "downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.136437ms May 16 21:48:46.210: INFO: Pod "downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006700509s May 16 21:48:48.215: INFO: Pod "downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011409446s STEP: Saw pod success May 16 21:48:48.215: INFO: Pod "downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5" satisfied condition "success or failure" May 16 21:48:48.218: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5 container client-container: STEP: delete the pod May 16 21:48:48.239: INFO: Waiting for pod downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5 to disappear May 16 21:48:48.273: INFO: Pod downwardapi-volume-8c363e3f-ef7e-49bc-9df0-9f4e8cdf15d5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:48.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2665" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":1963,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:48.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:48:49.456: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:48:51.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262529, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262529, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262529, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262529, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:48:54.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should 
be denied by the webhook May 16 21:48:54.554: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:54.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-235" for this suite. STEP: Destroying namespace "webhook-235-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.499 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":132,"skipped":1963,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:54.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-e512db9a-1383-4128-aa70-e1ce0b118c2b STEP: Creating a pod to test consume secrets May 16 21:48:54.863: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020" in namespace "projected-8065" to be "success or failure" May 16 21:48:54.883: INFO: Pod "pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020": Phase="Pending", Reason="", readiness=false. Elapsed: 20.062031ms May 16 21:48:56.887: INFO: Pod "pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024290096s May 16 21:48:58.891: INFO: Pod "pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028372078s STEP: Saw pod success May 16 21:48:58.891: INFO: Pod "pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020" satisfied condition "success or failure" May 16 21:48:58.893: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020 container secret-volume-test: STEP: delete the pod May 16 21:48:58.931: INFO: Waiting for pod pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020 to disappear May 16 21:48:58.946: INFO: Pod pod-projected-secrets-fcb47d0a-0506-4222-a0db-ae520eadf020 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:48:58.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8065" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":1965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:48:58.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:49:05.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1400" for this suite. STEP: Destroying namespace "nsdeletetest-6383" for this suite. May 16 21:49:05.227: INFO: Namespace nsdeletetest-6383 was already deleted STEP: Destroying namespace "nsdeletetest-504" for this suite. 
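The namespace-deletion spec above boils down to an API-level invariant: delete a namespace and every service in it must be gone, even after a namespace of the same name is recreated. A minimal sketch of that check with the Python kubernetes client — the namespace and service names are illustrative stand-ins, not the suite's generated ones:

```python
import time
from kubernetes import client, config

config.load_kube_config()  # same kubeconfig the suite uses (/root/.kube/config)
v1 = client.CoreV1Api()

ns = "nsdelete-demo"  # illustrative; the suite generates names like nsdeletetest-6383
v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))
v1.create_namespaced_service(ns, client.V1Service(
    metadata=client.V1ObjectMeta(name="test-service"),
    spec=client.V1ServiceSpec(ports=[client.V1ServicePort(port=80)])))

v1.delete_namespace(ns)
while any(n.metadata.name == ns for n in v1.list_namespace().items):
    time.sleep(2)  # namespace deletion is asynchronous; wait for it to vanish

# Recreate the namespace and verify no service survived the round trip.
v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))
assert not v1.list_namespaced_service(ns).items
```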
• [SLOW TEST:6.255 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":134,"skipped":2015,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:49:05.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:49:05.944: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:49:08.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262545, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262545, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262546, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262545, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:49:11.232: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:49:11.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5782-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:49:12.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-4087" for this suite. STEP: Destroying namespace "webhook-4087-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.233 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":135,"skipped":2037,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:49:12.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:49:12.565: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 16 21:49:15.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4410 create -f -' May 16 21:49:18.918: INFO: stderr: "" May 16 21:49:18.918: INFO: stdout: "e2e-test-crd-publish-openapi-1742-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 16 21:49:18.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4410 delete e2e-test-crd-publish-openapi-1742-crds test-cr' May 16 21:49:19.039: INFO: stderr: "" May 16 21:49:19.039: INFO: stdout: "e2e-test-crd-publish-openapi-1742-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 16 21:49:19.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4410 apply -f -' May 16 21:49:19.314: INFO: stderr: "" May 16 21:49:19.314: INFO: stdout: "e2e-test-crd-publish-openapi-1742-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 16 21:49:19.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4410 delete e2e-test-crd-publish-openapi-1742-crds test-cr' May 16 21:49:19.412: INFO: stderr: "" May 16 21:49:19.412: INFO: stdout: "e2e-test-crd-publish-openapi-1742-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 16 21:49:19.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1742-crds' May 16 21:49:19.636: INFO: stderr: "" May 16 21:49:19.636: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-1742-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:49:22.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4410" for this suite. • [SLOW TEST:10.071 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":136,"skipped":2076,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:49:22.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8190 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 16 21:49:22.629: INFO: Found 0 stateful pods, waiting for 3 May 16 21:49:32.634: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:49:32.634: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:49:32.634: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 16 21:49:42.634: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:49:42.634: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:49:42.634: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 16 21:49:42.662: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 16 21:49:52.749: INFO: Updating stateful set ss2 May 16 21:49:52.777: INFO: Waiting for Pod statefulset-8190/ss2-2 
to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 16 21:50:03.214: INFO: Found 2 stateful pods, waiting for 3 May 16 21:50:13.222: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:50:13.222: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:50:13.222: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 16 21:50:13.244: INFO: Updating stateful set ss2 May 16 21:50:13.250: INFO: Waiting for Pod statefulset-8190/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 16 21:50:23.291: INFO: Updating stateful set ss2 May 16 21:50:23.303: INFO: Waiting for StatefulSet statefulset-8190/ss2 to complete update May 16 21:50:23.303: INFO: Waiting for Pod statefulset-8190/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 16 21:50:33.311: INFO: Deleting all statefulset in ns statefulset-8190 May 16 21:50:33.314: INFO: Scaling statefulset ss2 to 0 May 16 21:51:03.364: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:51:03.366: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:03.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8190" for this suite. • [SLOW TEST:100.855 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":137,"skipped":2092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:03.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-facb1dcd-33c9-43cf-ae88-e88aa63aeb1c STEP: Creating a pod to test consume configMaps May 16 21:51:03.526: INFO: Waiting up to 5m0s for 
pod "pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b" in namespace "projected-5841" to be "success or failure" May 16 21:51:03.546: INFO: Pod "pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.943163ms May 16 21:51:05.551: INFO: Pod "pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024618791s May 16 21:51:07.554: INFO: Pod "pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028017268s May 16 21:51:09.559: INFO: Pod "pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032361846s STEP: Saw pod success May 16 21:51:09.559: INFO: Pod "pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b" satisfied condition "success or failure" May 16 21:51:09.562: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b container projected-configmap-volume-test: STEP: delete the pod May 16 21:51:09.624: INFO: Waiting for pod pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b to disappear May 16 21:51:09.630: INFO: Pod pod-projected-configmaps-2ad468cc-6171-45e1-8240-b4704ee4f07b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:09.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5841" for this suite. • [SLOW TEST:6.244 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2115,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:09.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:51:09.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-364' May 16 21:51:10.041: INFO: stderr: "" May 16 21:51:10.041: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 16 21:51:10.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-364' May 16 21:51:10.324: INFO: stderr: "" May 16 21:51:10.325: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 16 21:51:11.339: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:51:11.339: INFO: Found 0 / 1 May 16 21:51:12.328: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:51:12.328: INFO: Found 0 / 1 May 16 21:51:13.329: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:51:13.329: INFO: Found 0 / 1 May 16 21:51:14.330: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:51:14.330: INFO: Found 1 / 1 May 16 21:51:14.330: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 16 21:51:14.334: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:51:14.334: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 16 21:51:14.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-sgsck --namespace=kubectl-364' May 16 21:51:14.458: INFO: stderr: "" May 16 21:51:14.458: INFO: stdout: "Name: agnhost-master-sgsck\nNamespace: kubectl-364\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Sat, 16 May 2020 21:51:10 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.138\nIPs:\n IP: 10.244.2.138\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://624896dff63878f838f22339af37fee02e86d5c6019f77338e9efaf7079369d5\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 16 May 2020 21:51:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-w4qng (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-w4qng:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-w4qng\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-364/agnhost-master-sgsck to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker2 Started container agnhost-master\n" May 16 21:51:14.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-364' May 16 21:51:14.596: INFO: stderr: "" May 16 21:51:14.596: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-364\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal 
SuccessfulCreate 4s replication-controller Created pod: agnhost-master-sgsck\n" May 16 21:51:14.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-364' May 16 21:51:14.717: INFO: stderr: "" May 16 21:51:14.717: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-364\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.67.250\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.138:6379\nSession Affinity: None\nEvents: \n" May 16 21:51:14.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 16 21:51:14.881: INFO: stderr: "" May 16 21:51:14.881: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sat, 16 May 2020 21:51:07 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 16 May 2020 21:47:26 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 16 May 2020 21:47:26 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 16 May 2020 21:47:26 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 16 May 2020 21:47:26 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 62d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 62d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 62d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 
(0%) 0 (0%) 0 (0%) 62d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 16 21:51:14.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-364' May 16 21:51:14.978: INFO: stderr: "" May 16 21:51:14.978: INFO: stdout: "Name: kubectl-364\nLabels: e2e-framework=kubectl\n e2e-run=79c05dbd-8833-4311-8e5a-96d7c6c4a021\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:14.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-364" for this suite. • [SLOW TEST:5.346 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":139,"skipped":2131,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:14.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-1493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1493 to expose endpoints map[] May 16 21:51:15.121: INFO: Get endpoints failed (30.271303ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 16 21:51:16.126: INFO: successfully validated that service endpoint-test2 in namespace services-1493 exposes endpoints map[] (1.034493087s elapsed) STEP: Creating pod pod1 in namespace services-1493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1493 to expose endpoints map[pod1:[80]] May 16 21:51:20.272: INFO: successfully validated that service endpoint-test2 in namespace services-1493 exposes endpoints map[pod1:[80]] (4.139648653s elapsed) STEP: Creating pod pod2 in namespace services-1493 STEP: waiting up 
to 3m0s for service endpoint-test2 in namespace services-1493 to expose endpoints map[pod1:[80] pod2:[80]] May 16 21:51:24.632: INFO: successfully validated that service endpoint-test2 in namespace services-1493 exposes endpoints map[pod1:[80] pod2:[80]] (4.321704268s elapsed) STEP: Deleting pod pod1 in namespace services-1493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1493 to expose endpoints map[pod2:[80]] May 16 21:51:25.707: INFO: successfully validated that service endpoint-test2 in namespace services-1493 exposes endpoints map[pod2:[80]] (1.071579848s elapsed) STEP: Deleting pod pod2 in namespace services-1493 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1493 to expose endpoints map[] May 16 21:51:25.729: INFO: successfully validated that service endpoint-test2 in namespace services-1493 exposes endpoints map[] (15.806572ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:25.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1493" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.779 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":140,"skipped":2150,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:25.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-5d228e27-b4ec-4d66-a9a6-9b75312ee6be STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:32.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6744" for this suite. 
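The ConfigMap binary-data spec that just completed hinges on the `binaryData` field, which carries base64-encoded bytes alongside plain-text `data` keys; the pod then polls for both to show up in the mounted volume. A minimal sketch of creating such a ConfigMap with the Python kubernetes client (name, keys, and payload are illustrative):

```python
import base64
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# binaryData values travel base64-encoded on the wire; the Python client
# takes the already-encoded string directly.
cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="configmap-binary-demo"),
    data={"text": "hello"},
    binary_data={"blob": base64.b64encode(b"\x00\x01\x02\xff").decode()},
)
v1.create_namespaced_config_map("default", cm)
```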
• [SLOW TEST:6.391 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2162,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:32.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:43.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6602" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":142,"skipped":2175,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:43.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:47.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1421" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2178,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:47.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 16 21:51:47.472: INFO: Waiting up to 5m0s for pod "pod-953720df-fbb3-4872-9659-e75a4861291e" in namespace "emptydir-4227" to be "success or failure" May 16 21:51:47.494: INFO: Pod "pod-953720df-fbb3-4872-9659-e75a4861291e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.76159ms May 16 21:51:49.544: INFO: Pod "pod-953720df-fbb3-4872-9659-e75a4861291e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072188762s May 16 21:51:51.548: INFO: Pod "pod-953720df-fbb3-4872-9659-e75a4861291e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.076410402s STEP: Saw pod success May 16 21:51:51.548: INFO: Pod "pod-953720df-fbb3-4872-9659-e75a4861291e" satisfied condition "success or failure" May 16 21:51:51.552: INFO: Trying to get logs from node jerma-worker2 pod pod-953720df-fbb3-4872-9659-e75a4861291e container test-container: STEP: delete the pod May 16 21:51:51.588: INFO: Waiting for pod pod-953720df-fbb3-4872-9659-e75a4861291e to disappear May 16 21:51:51.598: INFO: Pod pod-953720df-fbb3-4872-9659-e75a4861291e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:51.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4227" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2200,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:51.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:51:51.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e" in namespace "downward-api-4441" to be "success or failure" May 16 21:51:51.684: INFO: Pod "downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.620298ms May 16 21:51:53.688: INFO: Pod "downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021694758s May 16 21:51:55.693: INFO: Pod "downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026522082s STEP: Saw pod success May 16 21:51:55.693: INFO: Pod "downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e" satisfied condition "success or failure" May 16 21:51:55.697: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e container client-container: STEP: delete the pod May 16 21:51:55.727: INFO: Waiting for pod downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e to disappear May 16 21:51:55.742: INFO: Pod downwardapi-volume-e03e84d1-67ca-43ed-87ed-cd8068e8db8e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:51:55.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4441" for this suite. 
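The downward-API spec above works by mounting a `downwardAPI` volume whose file is backed by a `resourceFieldRef` on the container's CPU request; the test then reads the file back from the container log. A comparable pod, sketched with the Python kubernetes client under assumed names and image (the suite's own pod spec differs in detail):

```python
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="client-container",  # matches the container name in the log; image is illustrative
    image="busybox",
    command=["sh", "-c", "cat /etc/podinfo/cpu_request"],
    resources=client.V1ResourceRequirements(requests={"cpu": "250m"}),
    volume_mounts=[client.V1VolumeMount(name="podinfo", mount_path="/etc/podinfo")],
)
volume = client.V1Volume(
    name="podinfo",
    downward_api=client.V1DownwardAPIVolumeSource(items=[
        client.V1DownwardAPIVolumeFile(
            path="cpu_request",
            resource_field_ref=client.V1ResourceFieldSelector(
                container_name="client-container",
                resource="requests.cpu",
                divisor="1m",  # report the request in millicores (file contains 250)
            ),
        ),
    ]),
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="downwardapi-volume-demo"),
    spec=client.V1PodSpec(restart_policy="Never",
                          containers=[container], volumes=[volume]),
)
client.CoreV1Api().create_namespaced_pod("default", pod)
```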
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2202,"failed":0} ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:51:55.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-f64c8919-4d88-4e22-bafc-3507ae91b83e STEP: Creating secret with name s-test-opt-upd-b9cebe30-6291-46fa-bcf0-bbb33615eea2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f64c8919-4d88-4e22-bafc-3507ae91b83e STEP: Updating secret s-test-opt-upd-b9cebe30-6291-46fa-bcf0-bbb33615eea2 STEP: Creating secret with name s-test-opt-create-3754624c-3a87-4a22-a5d5-690e66769dda STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:52:03.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6390" for this suite. • [SLOW TEST:8.206 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2202,"failed":0} [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:52:03.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 16 21:52:04.666: INFO: Pod name wrapped-volume-race-3e13d9ff-311f-4a28-b4ae-2114c23f2c0a: Found 0 pods out of 5 May 16 21:52:10.038: INFO: Pod name wrapped-volume-race-3e13d9ff-311f-4a28-b4ae-2114c23f2c0a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3e13d9ff-311f-4a28-b4ae-2114c23f2c0a in namespace emptydir-wrapper-8122, will wait for the 
garbage collector to delete the pods May 16 21:52:24.130: INFO: Deleting ReplicationController wrapped-volume-race-3e13d9ff-311f-4a28-b4ae-2114c23f2c0a took: 7.19288ms May 16 21:52:24.430: INFO: Terminating ReplicationController wrapped-volume-race-3e13d9ff-311f-4a28-b4ae-2114c23f2c0a pods took: 300.240989ms STEP: Creating RC which spawns configmap-volume pods May 16 21:52:40.379: INFO: Pod name wrapped-volume-race-c55ca28e-4f1c-42eb-bfc7-b642eca2037c: Found 0 pods out of 5 May 16 21:52:45.387: INFO: Pod name wrapped-volume-race-c55ca28e-4f1c-42eb-bfc7-b642eca2037c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c55ca28e-4f1c-42eb-bfc7-b642eca2037c in namespace emptydir-wrapper-8122, will wait for the garbage collector to delete the pods May 16 21:53:01.470: INFO: Deleting ReplicationController wrapped-volume-race-c55ca28e-4f1c-42eb-bfc7-b642eca2037c took: 8.085406ms May 16 21:53:01.870: INFO: Terminating ReplicationController wrapped-volume-race-c55ca28e-4f1c-42eb-bfc7-b642eca2037c pods took: 400.307012ms STEP: Creating RC which spawns configmap-volume pods May 16 21:53:09.724: INFO: Pod name wrapped-volume-race-3074cbff-59ef-40b3-9371-25d872b56c16: Found 0 pods out of 5 May 16 21:53:14.733: INFO: Pod name wrapped-volume-race-3074cbff-59ef-40b3-9371-25d872b56c16: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3074cbff-59ef-40b3-9371-25d872b56c16 in namespace emptydir-wrapper-8122, will wait for the garbage collector to delete the pods May 16 21:53:28.835: INFO: Deleting ReplicationController wrapped-volume-race-3074cbff-59ef-40b3-9371-25d872b56c16 took: 7.380278ms May 16 21:53:29.235: INFO: Terminating ReplicationController wrapped-volume-race-3074cbff-59ef-40b3-9371-25d872b56c16 pods took: 400.315908ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:53:41.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8122" for this suite. 
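Each teardown above follows the same pattern: delete only the ReplicationController object and poll until the garbage collector has reaped its pods. A sketch of that pattern with the Python kubernetes client, assuming an illustrative RC whose pods carry a `name=<rc-name>` label:

```python
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

ns, rc_name = "default", "wrapped-volume-race-demo"  # illustrative names

# Background propagation deletes the RC immediately and leaves its pods
# to the garbage collector, mirroring the "will wait for the garbage
# collector" step in the log.
v1.delete_namespaced_replication_controller(
    rc_name, ns, body=client.V1DeleteOptions(propagation_policy="Background"))

while v1.list_namespaced_pod(ns, label_selector=f"name={rc_name}").items:
    time.sleep(2)  # poll until the GC has removed every pod of the RC
```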
• [SLOW TEST:97.181 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":147,"skipped":2202,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:53:41.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-h6t6 STEP: Creating a pod to test atomic-volume-subpath May 16 21:53:41.271: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h6t6" in namespace "subpath-2784" to be "success or failure" May 16 21:53:41.281: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179734ms May 16 21:53:43.286: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01501707s May 16 21:53:45.291: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 4.019937657s May 16 21:53:47.296: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 6.025022651s May 16 21:53:49.300: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 8.028874399s May 16 21:53:51.305: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 10.034282653s May 16 21:53:53.309: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 12.038627241s May 16 21:53:55.315: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 14.044099236s May 16 21:53:57.319: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 16.04831369s May 16 21:53:59.323: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 18.052105775s May 16 21:54:01.327: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 20.056569191s May 16 21:54:03.332: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Running", Reason="", readiness=true. Elapsed: 22.061010121s May 16 21:54:05.335: INFO: Pod "pod-subpath-test-configmap-h6t6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.064758561s STEP: Saw pod success May 16 21:54:05.335: INFO: Pod "pod-subpath-test-configmap-h6t6" satisfied condition "success or failure" May 16 21:54:05.338: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-h6t6 container test-container-subpath-configmap-h6t6: STEP: delete the pod May 16 21:54:05.366: INFO: Waiting for pod pod-subpath-test-configmap-h6t6 to disappear May 16 21:54:05.371: INFO: Pod pod-subpath-test-configmap-h6t6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-h6t6 May 16 21:54:05.371: INFO: Deleting pod "pod-subpath-test-configmap-h6t6" in namespace "subpath-2784" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:54:05.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2784" for this suite. • [SLOW TEST:24.240 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":148,"skipped":2213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:54:05.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 21:54:05.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5728' May 16 21:54:05.558: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 21:54:05.558: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 16 21:54:05.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5728' May 16 21:54:05.666: INFO: stderr: "" May 16 21:54:05.666: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:54:05.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5728" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":149,"skipped":2246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:54:05.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:54:05.847: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 16 21:54:05.858: INFO: Number of nodes with available pods: 0 May 16 21:54:05.858: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
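The step above exercises DaemonSet node-selector scheduling: daemon pods run only on nodes whose labels match spec.template.spec.nodeSelector, so labelling a node into or out of the selector launches or evicts the daemon pod without editing the DaemonSet itself. A minimal sketch of the manifest shape involved follows; the label key/value (color: blue) and the image are illustrative assumptions, not the values the suite generates for this run.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          # Daemon pods are scheduled only onto nodes carrying this label.
          nodeSelector:
            color: blue          # hypothetical label; the suite generates its own key/value
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine   # image reused from elsewhere in this run

    # Labelling a node to match the selector launches the daemon pod there:
    #   kubectl label node jerma-worker2 color=blue
    # Re-labelling it (e.g. color=green) while the selector still says blue
    # unschedules the pod again, which is the behavior the records below walk through.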
May 16 21:54:05.971: INFO: Number of nodes with available pods: 0 May 16 21:54:05.971: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:06.975: INFO: Number of nodes with available pods: 0 May 16 21:54:06.975: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:08.097: INFO: Number of nodes with available pods: 0 May 16 21:54:08.097: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:08.975: INFO: Number of nodes with available pods: 1 May 16 21:54:08.975: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 16 21:54:09.002: INFO: Number of nodes with available pods: 1 May 16 21:54:09.002: INFO: Number of running nodes: 0, number of available pods: 1 May 16 21:54:10.006: INFO: Number of nodes with available pods: 0 May 16 21:54:10.006: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 16 21:54:10.021: INFO: Number of nodes with available pods: 0 May 16 21:54:10.021: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:11.026: INFO: Number of nodes with available pods: 0 May 16 21:54:11.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:12.026: INFO: Number of nodes with available pods: 0 May 16 21:54:12.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:13.026: INFO: Number of nodes with available pods: 0 May 16 21:54:13.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:14.026: INFO: Number of nodes with available pods: 0 May 16 21:54:14.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:15.026: INFO: Number of nodes with available pods: 0 May 16 21:54:15.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:16.026: INFO: Number of nodes with available pods: 0 May 16 21:54:16.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:17.026: INFO: Number of nodes with available pods: 0 May 16 21:54:17.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:18.025: INFO: Number of nodes with available pods: 0 May 16 21:54:18.025: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:19.026: INFO: Number of nodes with available pods: 0 May 16 21:54:19.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:20.026: INFO: Number of nodes with available pods: 0 May 16 21:54:20.026: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:21.025: INFO: Number of nodes with available pods: 0 May 16 21:54:21.025: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:22.030: INFO: Number of nodes with available pods: 0 May 16 21:54:22.030: INFO: Node jerma-worker2 is running more than one daemon pod May 16 21:54:23.025: INFO: Number of nodes with available pods: 1 May 16 21:54:23.025: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2419, will wait for the garbage collector to delete the pods May 16 21:54:23.088: INFO: Deleting DaemonSet.extensions daemon-set took: 5.817157ms May 16 21:54:23.388: INFO: 
Terminating DaemonSet.extensions daemon-set pods took: 300.232796ms May 16 21:54:29.516: INFO: Number of nodes with available pods: 0 May 16 21:54:29.516: INFO: Number of running nodes: 0, number of available pods: 0 May 16 21:54:29.518: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2419/daemonsets","resourceVersion":"16744387"},"items":null} May 16 21:54:29.521: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2419/pods","resourceVersion":"16744387"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:54:29.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2419" for this suite. • [SLOW TEST:23.865 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":150,"skipped":2302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:54:29.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 16 21:54:29.632: INFO: Waiting up to 5m0s for pod "var-expansion-80a43710-00e9-4c50-827d-be87a184c093" in namespace "var-expansion-1484" to be "success or failure" May 16 21:54:29.652: INFO: Pod "var-expansion-80a43710-00e9-4c50-827d-be87a184c093": Phase="Pending", Reason="", readiness=false. Elapsed: 19.699431ms May 16 21:54:31.656: INFO: Pod "var-expansion-80a43710-00e9-4c50-827d-be87a184c093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024150154s May 16 21:54:33.662: INFO: Pod "var-expansion-80a43710-00e9-4c50-827d-be87a184c093": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029388944s STEP: Saw pod success May 16 21:54:33.662: INFO: Pod "var-expansion-80a43710-00e9-4c50-827d-be87a184c093" satisfied condition "success or failure" May 16 21:54:33.665: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-80a43710-00e9-4c50-827d-be87a184c093 container dapi-container: STEP: delete the pod May 16 21:54:33.693: INFO: Waiting for pod var-expansion-80a43710-00e9-4c50-827d-be87a184c093 to disappear May 16 21:54:33.711: INFO: Pod var-expansion-80a43710-00e9-4c50-827d-be87a184c093 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:54:33.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1484" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:54:33.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:54:33.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5" in namespace "projected-5440" to be "success or failure" May 16 21:54:33.842: INFO: Pod "downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.304763ms May 16 21:54:35.846: INFO: Pod "downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03286847s May 16 21:54:37.850: INFO: Pod "downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036350563s STEP: Saw pod success May 16 21:54:37.850: INFO: Pod "downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5" satisfied condition "success or failure" May 16 21:54:37.851: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5 container client-container: STEP: delete the pod May 16 21:54:37.866: INFO: Waiting for pod downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5 to disappear May 16 21:54:37.871: INFO: Pod downwardapi-volume-1b0c3aaa-b1b8-42e2-9b21-3005974263e5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:54:37.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5440" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2385,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:54:37.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7374 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-7374 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7374 May 16 21:54:37.984: INFO: Found 0 stateful pods, waiting for 1 May 16 21:54:47.989: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 16 21:54:47.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:54:48.253: INFO: stderr: "I0516 21:54:48.131198 2171 log.go:172] (0xc000a77a20) (0xc000ad06e0) Create stream\nI0516 21:54:48.131250 2171 log.go:172] (0xc000a77a20) (0xc000ad06e0) Stream added, broadcasting: 1\nI0516 21:54:48.133529 2171 log.go:172] (0xc000a77a20) Reply frame received for 1\nI0516 21:54:48.133564 2171 log.go:172] (0xc000a77a20) (0xc000a6e8c0) Create stream\nI0516 21:54:48.133585 2171 log.go:172] (0xc000a77a20) (0xc000a6e8c0) Stream added, broadcasting: 3\nI0516 21:54:48.134639 2171 log.go:172] (0xc000a77a20) Reply frame received for 3\nI0516 21:54:48.134675 2171 log.go:172] (0xc000a77a20) (0xc000a54d20) 
Create stream\nI0516 21:54:48.134694 2171 log.go:172] (0xc000a77a20) (0xc000a54d20) Stream added, broadcasting: 5\nI0516 21:54:48.135853 2171 log.go:172] (0xc000a77a20) Reply frame received for 5\nI0516 21:54:48.203859 2171 log.go:172] (0xc000a77a20) Data frame received for 5\nI0516 21:54:48.203890 2171 log.go:172] (0xc000a54d20) (5) Data frame handling\nI0516 21:54:48.203910 2171 log.go:172] (0xc000a54d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:54:48.242719 2171 log.go:172] (0xc000a77a20) Data frame received for 3\nI0516 21:54:48.242850 2171 log.go:172] (0xc000a6e8c0) (3) Data frame handling\nI0516 21:54:48.242891 2171 log.go:172] (0xc000a6e8c0) (3) Data frame sent\nI0516 21:54:48.242913 2171 log.go:172] (0xc000a77a20) Data frame received for 3\nI0516 21:54:48.242954 2171 log.go:172] (0xc000a6e8c0) (3) Data frame handling\nI0516 21:54:48.242987 2171 log.go:172] (0xc000a77a20) Data frame received for 5\nI0516 21:54:48.243009 2171 log.go:172] (0xc000a54d20) (5) Data frame handling\nI0516 21:54:48.245750 2171 log.go:172] (0xc000a77a20) Data frame received for 1\nI0516 21:54:48.245780 2171 log.go:172] (0xc000ad06e0) (1) Data frame handling\nI0516 21:54:48.245794 2171 log.go:172] (0xc000ad06e0) (1) Data frame sent\nI0516 21:54:48.245810 2171 log.go:172] (0xc000a77a20) (0xc000ad06e0) Stream removed, broadcasting: 1\nI0516 21:54:48.245925 2171 log.go:172] (0xc000a77a20) Go away received\nI0516 21:54:48.246274 2171 log.go:172] (0xc000a77a20) (0xc000ad06e0) Stream removed, broadcasting: 1\nI0516 21:54:48.246307 2171 log.go:172] (0xc000a77a20) (0xc000a6e8c0) Stream removed, broadcasting: 3\nI0516 21:54:48.246321 2171 log.go:172] (0xc000a77a20) (0xc000a54d20) Stream removed, broadcasting: 5\n" May 16 21:54:48.253: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:54:48.253: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:54:48.257: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 16 21:54:58.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 21:54:58.261: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:54:58.284: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:54:58.284: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC }] May 16 21:54:58.284: INFO: May 16 21:54:58.284: INFO: StatefulSet ss has not reached scale 3, at 1 May 16 21:54:59.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985713499s May 16 21:55:00.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.919154889s May 16 21:55:01.360: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.914049632s May 16 21:55:02.365: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.909255116s May 16 21:55:03.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.903748939s May 16 21:55:04.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 
3.89919001s May 16 21:55:05.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.893769585s May 16 21:55:06.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.888340754s May 16 21:55:07.391: INFO: Verifying statefulset ss doesn't scale past 3 for another 882.621204ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7374 May 16 21:55:08.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:55:08.612: INFO: stderr: "I0516 21:55:08.530723 2191 log.go:172] (0xc000556dc0) (0xc0005e1d60) Create stream\nI0516 21:55:08.530785 2191 log.go:172] (0xc000556dc0) (0xc0005e1d60) Stream added, broadcasting: 1\nI0516 21:55:08.534067 2191 log.go:172] (0xc000556dc0) Reply frame received for 1\nI0516 21:55:08.534129 2191 log.go:172] (0xc000556dc0) (0xc0005e1e00) Create stream\nI0516 21:55:08.534151 2191 log.go:172] (0xc000556dc0) (0xc0005e1e00) Stream added, broadcasting: 3\nI0516 21:55:08.535211 2191 log.go:172] (0xc000556dc0) Reply frame received for 3\nI0516 21:55:08.535256 2191 log.go:172] (0xc000556dc0) (0xc0004b6640) Create stream\nI0516 21:55:08.535273 2191 log.go:172] (0xc000556dc0) (0xc0004b6640) Stream added, broadcasting: 5\nI0516 21:55:08.536231 2191 log.go:172] (0xc000556dc0) Reply frame received for 5\nI0516 21:55:08.600055 2191 log.go:172] (0xc000556dc0) Data frame received for 5\nI0516 21:55:08.600097 2191 log.go:172] (0xc0004b6640) (5) Data frame handling\nI0516 21:55:08.600117 2191 log.go:172] (0xc0004b6640) (5) Data frame sent\nI0516 21:55:08.600142 2191 log.go:172] (0xc000556dc0) Data frame received for 5\nI0516 21:55:08.600155 2191 log.go:172] (0xc0004b6640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 21:55:08.600201 2191 log.go:172] (0xc000556dc0) Data frame received for 3\nI0516 21:55:08.600241 2191 log.go:172] (0xc0005e1e00) (3) Data frame handling\nI0516 21:55:08.600262 2191 log.go:172] (0xc0005e1e00) (3) Data frame sent\nI0516 21:55:08.600284 2191 log.go:172] (0xc000556dc0) Data frame received for 3\nI0516 21:55:08.600300 2191 log.go:172] (0xc0005e1e00) (3) Data frame handling\nI0516 21:55:08.606636 2191 log.go:172] (0xc000556dc0) Data frame received for 1\nI0516 21:55:08.606676 2191 log.go:172] (0xc0005e1d60) (1) Data frame handling\nI0516 21:55:08.606698 2191 log.go:172] (0xc0005e1d60) (1) Data frame sent\nI0516 21:55:08.606720 2191 log.go:172] (0xc000556dc0) (0xc0005e1d60) Stream removed, broadcasting: 1\nI0516 21:55:08.607196 2191 log.go:172] (0xc000556dc0) (0xc0005e1d60) Stream removed, broadcasting: 1\nI0516 21:55:08.607219 2191 log.go:172] (0xc000556dc0) (0xc0005e1e00) Stream removed, broadcasting: 3\nI0516 21:55:08.607233 2191 log.go:172] (0xc000556dc0) (0xc0004b6640) Stream removed, broadcasting: 5\n" May 16 21:55:08.613: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:55:08.613: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:55:08.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:55:08.822: INFO: stderr: "I0516 21:55:08.747982 2212 log.go:172] (0xc00094c0b0) (0xc00050d4a0) Create stream\nI0516 
21:55:08.748037 2212 log.go:172] (0xc00094c0b0) (0xc00050d4a0) Stream added, broadcasting: 1\nI0516 21:55:08.750754 2212 log.go:172] (0xc00094c0b0) Reply frame received for 1\nI0516 21:55:08.750802 2212 log.go:172] (0xc00094c0b0) (0xc000926000) Create stream\nI0516 21:55:08.750819 2212 log.go:172] (0xc00094c0b0) (0xc000926000) Stream added, broadcasting: 3\nI0516 21:55:08.751601 2212 log.go:172] (0xc00094c0b0) Reply frame received for 3\nI0516 21:55:08.751634 2212 log.go:172] (0xc00094c0b0) (0xc0009e6000) Create stream\nI0516 21:55:08.751646 2212 log.go:172] (0xc00094c0b0) (0xc0009e6000) Stream added, broadcasting: 5\nI0516 21:55:08.752500 2212 log.go:172] (0xc00094c0b0) Reply frame received for 5\nI0516 21:55:08.813612 2212 log.go:172] (0xc00094c0b0) Data frame received for 3\nI0516 21:55:08.813650 2212 log.go:172] (0xc000926000) (3) Data frame handling\nI0516 21:55:08.813664 2212 log.go:172] (0xc000926000) (3) Data frame sent\nI0516 21:55:08.813675 2212 log.go:172] (0xc00094c0b0) Data frame received for 3\nI0516 21:55:08.813716 2212 log.go:172] (0xc00094c0b0) Data frame received for 5\nI0516 21:55:08.813782 2212 log.go:172] (0xc0009e6000) (5) Data frame handling\nI0516 21:55:08.813803 2212 log.go:172] (0xc0009e6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 21:55:08.813838 2212 log.go:172] (0xc000926000) (3) Data frame handling\nI0516 21:55:08.813918 2212 log.go:172] (0xc00094c0b0) Data frame received for 5\nI0516 21:55:08.813946 2212 log.go:172] (0xc0009e6000) (5) Data frame handling\nI0516 21:55:08.816093 2212 log.go:172] (0xc00094c0b0) Data frame received for 1\nI0516 21:55:08.816110 2212 log.go:172] (0xc00050d4a0) (1) Data frame handling\nI0516 21:55:08.816119 2212 log.go:172] (0xc00050d4a0) (1) Data frame sent\nI0516 21:55:08.816133 2212 log.go:172] (0xc00094c0b0) (0xc00050d4a0) Stream removed, broadcasting: 1\nI0516 21:55:08.816145 2212 log.go:172] (0xc00094c0b0) Go away received\nI0516 21:55:08.816579 2212 log.go:172] (0xc00094c0b0) (0xc00050d4a0) Stream removed, broadcasting: 1\nI0516 21:55:08.816602 2212 log.go:172] (0xc00094c0b0) (0xc000926000) Stream removed, broadcasting: 3\nI0516 21:55:08.816615 2212 log.go:172] (0xc00094c0b0) (0xc0009e6000) Stream removed, broadcasting: 5\n" May 16 21:55:08.822: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:55:08.822: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:55:08.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 21:55:09.035: INFO: stderr: "I0516 21:55:08.947701 2232 log.go:172] (0xc0009a2000) (0xc000b00000) Create stream\nI0516 21:55:08.947753 2232 log.go:172] (0xc0009a2000) (0xc000b00000) Stream added, broadcasting: 1\nI0516 21:55:08.950636 2232 log.go:172] (0xc0009a2000) Reply frame received for 1\nI0516 21:55:08.950666 2232 log.go:172] (0xc0009a2000) (0xc00070dc20) Create stream\nI0516 21:55:08.950674 2232 log.go:172] (0xc0009a2000) (0xc00070dc20) Stream added, broadcasting: 3\nI0516 21:55:08.951511 2232 log.go:172] (0xc0009a2000) Reply frame received for 3\nI0516 21:55:08.951540 2232 log.go:172] (0xc0009a2000) (0xc000640000) Create stream\nI0516 21:55:08.951548 2232 log.go:172] (0xc0009a2000) (0xc000640000) Stream added, broadcasting: 
5\nI0516 21:55:08.952430 2232 log.go:172] (0xc0009a2000) Reply frame received for 5\nI0516 21:55:09.027784 2232 log.go:172] (0xc0009a2000) Data frame received for 3\nI0516 21:55:09.027816 2232 log.go:172] (0xc00070dc20) (3) Data frame handling\nI0516 21:55:09.027828 2232 log.go:172] (0xc00070dc20) (3) Data frame sent\nI0516 21:55:09.027840 2232 log.go:172] (0xc0009a2000) Data frame received for 3\nI0516 21:55:09.027846 2232 log.go:172] (0xc00070dc20) (3) Data frame handling\nI0516 21:55:09.027854 2232 log.go:172] (0xc0009a2000) Data frame received for 5\nI0516 21:55:09.027859 2232 log.go:172] (0xc000640000) (5) Data frame handling\nI0516 21:55:09.027866 2232 log.go:172] (0xc000640000) (5) Data frame sent\nI0516 21:55:09.027872 2232 log.go:172] (0xc0009a2000) Data frame received for 5\nI0516 21:55:09.027877 2232 log.go:172] (0xc000640000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 21:55:09.029627 2232 log.go:172] (0xc0009a2000) Data frame received for 1\nI0516 21:55:09.029648 2232 log.go:172] (0xc000b00000) (1) Data frame handling\nI0516 21:55:09.029669 2232 log.go:172] (0xc000b00000) (1) Data frame sent\nI0516 21:55:09.029987 2232 log.go:172] (0xc0009a2000) (0xc000b00000) Stream removed, broadcasting: 1\nI0516 21:55:09.030087 2232 log.go:172] (0xc0009a2000) Go away received\nI0516 21:55:09.030367 2232 log.go:172] (0xc0009a2000) (0xc000b00000) Stream removed, broadcasting: 1\nI0516 21:55:09.030390 2232 log.go:172] (0xc0009a2000) (0xc00070dc20) Stream removed, broadcasting: 3\nI0516 21:55:09.030401 2232 log.go:172] (0xc0009a2000) (0xc000640000) Stream removed, broadcasting: 5\n" May 16 21:55:09.035: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 21:55:09.035: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 21:55:09.039: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 16 21:55:19.045: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 21:55:19.045: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 21:55:19.045: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 16 21:55:19.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:55:19.273: INFO: stderr: "I0516 21:55:19.176959 2251 log.go:172] (0xc000916b00) (0xc00074e0a0) Create stream\nI0516 21:55:19.177010 2251 log.go:172] (0xc000916b00) (0xc00074e0a0) Stream added, broadcasting: 1\nI0516 21:55:19.178913 2251 log.go:172] (0xc000916b00) Reply frame received for 1\nI0516 21:55:19.178969 2251 log.go:172] (0xc000916b00) (0xc000890000) Create stream\nI0516 21:55:19.178994 2251 log.go:172] (0xc000916b00) (0xc000890000) Stream added, broadcasting: 3\nI0516 21:55:19.179939 2251 log.go:172] (0xc000916b00) Reply frame received for 3\nI0516 21:55:19.179965 2251 log.go:172] (0xc000916b00) (0xc00074e140) Create stream\nI0516 21:55:19.179974 2251 log.go:172] (0xc000916b00) (0xc00074e140) Stream added, broadcasting: 5\nI0516 21:55:19.180701 2251 log.go:172] (0xc000916b00) Reply frame received for 5\nI0516 21:55:19.267226 2251 log.go:172] 
(0xc000916b00) Data frame received for 5\nI0516 21:55:19.267272 2251 log.go:172] (0xc00074e140) (5) Data frame handling\nI0516 21:55:19.267292 2251 log.go:172] (0xc00074e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:55:19.267334 2251 log.go:172] (0xc000916b00) Data frame received for 5\nI0516 21:55:19.267348 2251 log.go:172] (0xc00074e140) (5) Data frame handling\nI0516 21:55:19.267365 2251 log.go:172] (0xc000916b00) Data frame received for 3\nI0516 21:55:19.267376 2251 log.go:172] (0xc000890000) (3) Data frame handling\nI0516 21:55:19.267385 2251 log.go:172] (0xc000890000) (3) Data frame sent\nI0516 21:55:19.267394 2251 log.go:172] (0xc000916b00) Data frame received for 3\nI0516 21:55:19.267402 2251 log.go:172] (0xc000890000) (3) Data frame handling\nI0516 21:55:19.268780 2251 log.go:172] (0xc000916b00) Data frame received for 1\nI0516 21:55:19.268805 2251 log.go:172] (0xc00074e0a0) (1) Data frame handling\nI0516 21:55:19.268830 2251 log.go:172] (0xc00074e0a0) (1) Data frame sent\nI0516 21:55:19.268849 2251 log.go:172] (0xc000916b00) (0xc00074e0a0) Stream removed, broadcasting: 1\nI0516 21:55:19.268872 2251 log.go:172] (0xc000916b00) Go away received\nI0516 21:55:19.269523 2251 log.go:172] (0xc000916b00) (0xc00074e0a0) Stream removed, broadcasting: 1\nI0516 21:55:19.269549 2251 log.go:172] (0xc000916b00) (0xc000890000) Stream removed, broadcasting: 3\nI0516 21:55:19.269566 2251 log.go:172] (0xc000916b00) (0xc00074e140) Stream removed, broadcasting: 5\n" May 16 21:55:19.273: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:55:19.273: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:55:19.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:55:19.498: INFO: stderr: "I0516 21:55:19.410505 2267 log.go:172] (0xc000936000) (0xc0007e2000) Create stream\nI0516 21:55:19.410681 2267 log.go:172] (0xc000936000) (0xc0007e2000) Stream added, broadcasting: 1\nI0516 21:55:19.413892 2267 log.go:172] (0xc000936000) Reply frame received for 1\nI0516 21:55:19.413938 2267 log.go:172] (0xc000936000) (0xc000842000) Create stream\nI0516 21:55:19.414014 2267 log.go:172] (0xc000936000) (0xc000842000) Stream added, broadcasting: 3\nI0516 21:55:19.414884 2267 log.go:172] (0xc000936000) Reply frame received for 3\nI0516 21:55:19.414912 2267 log.go:172] (0xc000936000) (0xc0007e20a0) Create stream\nI0516 21:55:19.414927 2267 log.go:172] (0xc000936000) (0xc0007e20a0) Stream added, broadcasting: 5\nI0516 21:55:19.415913 2267 log.go:172] (0xc000936000) Reply frame received for 5\nI0516 21:55:19.461442 2267 log.go:172] (0xc000936000) Data frame received for 5\nI0516 21:55:19.461480 2267 log.go:172] (0xc0007e20a0) (5) Data frame handling\nI0516 21:55:19.461518 2267 log.go:172] (0xc0007e20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:55:19.492056 2267 log.go:172] (0xc000936000) Data frame received for 3\nI0516 21:55:19.492083 2267 log.go:172] (0xc000842000) (3) Data frame handling\nI0516 21:55:19.492093 2267 log.go:172] (0xc000842000) (3) Data frame sent\nI0516 21:55:19.492101 2267 log.go:172] (0xc000936000) Data frame received for 3\nI0516 21:55:19.492108 2267 log.go:172] (0xc000842000) (3) Data frame handling\nI0516 21:55:19.492373 2267 log.go:172] 
(0xc000936000) Data frame received for 5\nI0516 21:55:19.492404 2267 log.go:172] (0xc0007e20a0) (5) Data frame handling\nI0516 21:55:19.494415 2267 log.go:172] (0xc000936000) Data frame received for 1\nI0516 21:55:19.494427 2267 log.go:172] (0xc0007e2000) (1) Data frame handling\nI0516 21:55:19.494433 2267 log.go:172] (0xc0007e2000) (1) Data frame sent\nI0516 21:55:19.494440 2267 log.go:172] (0xc000936000) (0xc0007e2000) Stream removed, broadcasting: 1\nI0516 21:55:19.494681 2267 log.go:172] (0xc000936000) (0xc0007e2000) Stream removed, broadcasting: 1\nI0516 21:55:19.494697 2267 log.go:172] (0xc000936000) (0xc000842000) Stream removed, broadcasting: 3\nI0516 21:55:19.494798 2267 log.go:172] (0xc000936000) (0xc0007e20a0) Stream removed, broadcasting: 5\n" May 16 21:55:19.499: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:55:19.499: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:55:19.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7374 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 21:55:19.746: INFO: stderr: "I0516 21:55:19.637323 2286 log.go:172] (0xc0000f51e0) (0xc000671a40) Create stream\nI0516 21:55:19.637374 2286 log.go:172] (0xc0000f51e0) (0xc000671a40) Stream added, broadcasting: 1\nI0516 21:55:19.639785 2286 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0516 21:55:19.639848 2286 log.go:172] (0xc0000f51e0) (0xc000944000) Create stream\nI0516 21:55:19.639869 2286 log.go:172] (0xc0000f51e0) (0xc000944000) Stream added, broadcasting: 3\nI0516 21:55:19.640892 2286 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0516 21:55:19.640938 2286 log.go:172] (0xc0000f51e0) (0xc000212000) Create stream\nI0516 21:55:19.640951 2286 log.go:172] (0xc0000f51e0) (0xc000212000) Stream added, broadcasting: 5\nI0516 21:55:19.642004 2286 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0516 21:55:19.707942 2286 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0516 21:55:19.707967 2286 log.go:172] (0xc000212000) (5) Data frame handling\nI0516 21:55:19.707991 2286 log.go:172] (0xc000212000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 21:55:19.738025 2286 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0516 21:55:19.738058 2286 log.go:172] (0xc000944000) (3) Data frame handling\nI0516 21:55:19.738077 2286 log.go:172] (0xc000944000) (3) Data frame sent\nI0516 21:55:19.738462 2286 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0516 21:55:19.738485 2286 log.go:172] (0xc000944000) (3) Data frame handling\nI0516 21:55:19.738517 2286 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0516 21:55:19.738552 2286 log.go:172] (0xc000212000) (5) Data frame handling\nI0516 21:55:19.740347 2286 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0516 21:55:19.740366 2286 log.go:172] (0xc000671a40) (1) Data frame handling\nI0516 21:55:19.740378 2286 log.go:172] (0xc000671a40) (1) Data frame sent\nI0516 21:55:19.740507 2286 log.go:172] (0xc0000f51e0) (0xc000671a40) Stream removed, broadcasting: 1\nI0516 21:55:19.740596 2286 log.go:172] (0xc0000f51e0) Go away received\nI0516 21:55:19.740974 2286 log.go:172] (0xc0000f51e0) (0xc000671a40) Stream removed, broadcasting: 1\nI0516 21:55:19.740995 2286 log.go:172] (0xc0000f51e0) (0xc000944000) Stream removed, broadcasting: 3\nI0516 21:55:19.741005 2286 
log.go:172] (0xc0000f51e0) (0xc000212000) Stream removed, broadcasting: 5\n" May 16 21:55:19.746: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 21:55:19.746: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 21:55:19.746: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:55:19.750: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 16 21:55:29.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 21:55:29.758: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 16 21:55:29.758: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 16 21:55:29.770: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:29.770: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC }] May 16 21:55:29.770: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:29.770: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:29.770: INFO: May 16 21:55:29.770: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 21:55:30.776: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:30.776: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC }] May 16 21:55:30.776: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:30.776: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:30.776: INFO: May 16 21:55:30.776: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 21:55:31.811: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:31.811: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:38 +0000 UTC }] May 16 21:55:31.811: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:31.811: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:31.811: INFO: May 16 21:55:31.811: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 21:55:32.842: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:32.842: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:32.842: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:32.842: INFO: May 16 21:55:32.842: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 21:55:33.846: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:33.846: 
INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:33.847: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:33.847: INFO: May 16 21:55:33.847: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 21:55:34.851: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:34.851: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:34.851: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:34.851: INFO: May 16 21:55:34.851: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 21:55:35.856: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:35.856: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:35.856: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:35.857: INFO: May 16 21:55:35.857: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 21:55:36.863: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:36.863: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:36.863: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:36.863: INFO: May 16 21:55:36.863: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 21:55:37.867: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:37.867: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:37.868: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:37.868: INFO: May 16 21:55:37.868: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 21:55:38.872: INFO: POD NODE PHASE GRACE CONDITIONS May 16 21:55:38.872: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:38.872: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 21:54:58 +0000 UTC }] May 16 21:55:38.872: INFO: May 16 21:55:38.872: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7374 May 16 21:55:39.877: INFO: Scaling statefulset ss to 0 May 16
21:55:39.887: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 16 21:55:39.890: INFO: Deleting all statefulset in ns statefulset-7374 May 16 21:55:39.893: INFO: Scaling statefulset ss to 0 May 16 21:55:39.902: INFO: Waiting for statefulset status.replicas updated to 0 May 16 21:55:39.904: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:55:39.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7374" for this suite. • [SLOW TEST:62.052 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":153,"skipped":2387,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:55:39.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-ef1ca5f7-ae5d-4833-bcb1-f3817ce3a87c STEP: Creating a pod to test consume secrets May 16 21:55:40.018: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666" in namespace "projected-9290" to be "success or failure" May 16 21:55:40.028: INFO: Pod "pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666": Phase="Pending", Reason="", readiness=false. Elapsed: 9.901129ms May 16 21:55:42.033: INFO: Pod "pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014948474s May 16 21:55:44.038: INFO: Pod "pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019574814s STEP: Saw pod success May 16 21:55:44.038: INFO: Pod "pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666" satisfied condition "success or failure" May 16 21:55:44.042: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666 container projected-secret-volume-test: STEP: delete the pod May 16 21:55:44.072: INFO: Waiting for pod pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666 to disappear May 16 21:55:44.088: INFO: Pod pod-projected-secrets-d87e7105-b758-4c42-a9d8-bf8c647ae666 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:55:44.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9290" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2402,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:55:44.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 21:55:44.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135" in namespace "projected-1547" to be "success or failure" May 16 21:55:44.190: INFO: Pod "downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135": Phase="Pending", Reason="", readiness=false. Elapsed: 4.89801ms May 16 21:55:46.195: INFO: Pod "downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009635982s May 16 21:55:48.200: INFO: Pod "downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014026632s STEP: Saw pod success May 16 21:55:48.200: INFO: Pod "downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135" satisfied condition "success or failure" May 16 21:55:48.203: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135 container client-container: STEP: delete the pod May 16 21:55:48.244: INFO: Waiting for pod downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135 to disappear May 16 21:55:48.282: INFO: Pod downwardapi-volume-ea54a1c4-5d08-4196-8c8f-8d2a0be91135 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:55:48.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1547" for this suite. 
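The projected downwardAPI test above verifies that a container's memory limit can be surfaced to the container itself as a file, via a projected volume whose downwardAPI source uses a resourceFieldRef. A minimal sketch of such a pod follows; the pod name, mount path, file name, and the 64Mi limit are illustrative assumptions rather than what the framework builds internally.

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example   # hypothetical name
    spec:
      containers:
      - name: client-container
        image: docker.io/library/httpd:2.4.38-alpine   # any image able to read the mounted file
        resources:
          limits:
            memory: "64Mi"               # the value projected into the file below
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: memory_limit       # readable at /etc/podinfo/memory_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory

With the default divisor the value is rendered in bytes, so a 64Mi limit appears in the file as 67108864; the related test earlier in this run covers the complementary case, where no limit is set and the node's allocatable memory is reported instead.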
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2405,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:55:48.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:55:48.764: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:55:50.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:55:52.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262948, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:55:55.864: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:55:56.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5834" for this suite. STEP: Destroying namespace "webhook-5834-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.105 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":156,"skipped":2413,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:55:56.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 16 21:55:57.561: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4398 /api/v1/namespaces/watch-4398/configmaps/e2e-watch-test-watch-closed 1192e7ac-13b9-44aa-a57d-8ea8d27f4d33 16744992 0 2020-05-16 21:55:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 16 21:55:57.561: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4398 /api/v1/namespaces/watch-4398/configmaps/e2e-watch-test-watch-closed 1192e7ac-13b9-44aa-a57d-8ea8d27f4d33 16744995 0 2020-05-16 21:55:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 16 21:55:57.646: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4398 /api/v1/namespaces/watch-4398/configmaps/e2e-watch-test-watch-closed 1192e7ac-13b9-44aa-a57d-8ea8d27f4d33 16744998 0 2020-05-16 21:55:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 16 21:55:57.646: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4398 /api/v1/namespaces/watch-4398/configmaps/e2e-watch-test-watch-closed 1192e7ac-13b9-44aa-a57d-8ea8d27f4d33 16745000 0 2020-05-16 21:55:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:55:57.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4398" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":157,"skipped":2423,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:55:57.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:55:58.680: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:56:00.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 21:56:02.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725262958, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:56:05.770: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:56:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6286" for this suite. STEP: Destroying namespace "webhook-6286-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.270 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":158,"skipped":2423,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:56:06.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 16 21:56:10.704: INFO: Successfully updated pod "annotationupdateceb4846b-1217-41e7-968b-30bf1a504063" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:56:14.767: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2808" for this suite. • [SLOW TEST:8.710 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:56:14.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:56:14.841: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 7.812792ms) May 16 21:56:14.844: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.49973ms) May 16 21:56:14.872: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 27.456243ms) May 16 21:56:14.875: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.025855ms) May 16 21:56:14.877: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.427019ms) May 16 21:56:14.880: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.224287ms) May 16 21:56:14.883: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.124527ms) May 16 21:56:14.885: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.370864ms) May 16 21:56:14.888: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.566781ms) May 16 21:56:14.890: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.295419ms) May 16 21:56:14.893: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.863315ms) May 16 21:56:14.896: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.896469ms) May 16 21:56:14.899: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.004869ms) May 16 21:56:14.902: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.701087ms) May 16 21:56:14.907: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.552534ms) May 16 21:56:14.910: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.497438ms) May 16 21:56:14.912: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 1.934555ms) May 16 21:56:14.914: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.006235ms) May 16 21:56:14.916: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 1.920058ms) May 16 21:56:14.918: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.342603ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:56:14.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6979" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":160,"skipped":2500,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:56:14.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0516 21:56:55.394398 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 21:56:55.394: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:56:55.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3098" for this suite. 
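
The orphaning behavior the garbage-collector spec above waits 30 seconds to confirm is driven entirely by the delete options on the ReplicationController, not by anything in the controller itself. A minimal client-go sketch, assuming recent client-go signatures and an illustrative RC name and namespace:

    // Sketch: delete a ReplicationController but orphan its pods, so the
    // garbage collector must leave them running (what the spec above checks).
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // With Orphan propagation the RC object is removed, but its pods keep
        // running and the GC only clears their ownerReferences.
        orphan := metav1.DeletePropagationOrphan
        err = client.CoreV1().ReplicationControllers("default").Delete(
            context.TODO(), "demo-rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
        if err != nil {
            panic(err)
        }
    }
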
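
Relatedly, the Watchers spec earlier in this stretch ("restart watching from the last resource version observed by the previous watch") leans on the fact that a closed watch can be resumed from the resourceVersion of the last delivered event, with the apiserver replaying everything that happened in the gap. A sketch of that resume pattern, assuming recent client-go and the "default" namespace (a resourceVersion old enough to have been compacted would instead fail with 410 Gone):

    // Sketch: resume a configmap watch from the last observed resourceVersion
    // so no events are missed across the gap between the two watches.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        cms := client.CoreV1().ConfigMaps("default")

        // First watch: consume a couple of events, remembering the
        // resourceVersion of the last object seen, then close the watch.
        w1, err := cms.Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        var lastRV string
        for i := 0; i < 2; i++ {
            ev := <-w1.ResultChan()
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                lastRV = cm.ResourceVersion
            }
        }
        w1.Stop()

        // Second watch: resume from lastRV; the apiserver replays every
        // change made after that version, so nothing is lost in the gap.
        w2, err := cms.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: lastRV})
        if err != nil {
            panic(err)
        }
        for ev := range w2.ResultChan() {
            fmt.Println("resumed event:", ev.Type)
        }
    }
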
• [SLOW TEST:40.478 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":161,"skipped":2502,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:56:55.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1630 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1630 STEP: creating replication controller externalsvc in namespace services-1630 I0516 21:56:55.642546 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1630, replica count: 2 I0516 21:56:58.692978 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:57:01.693441 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 16 21:57:01.817: INFO: Creating new exec pod May 16 21:57:08.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1630 execpodh8rwk -- /bin/sh -x -c nslookup clusterip-service' May 16 21:57:08.409: INFO: stderr: "I0516 21:57:08.249059 2305 log.go:172] (0xc000209080) (0xc00068dc20) Create stream\nI0516 21:57:08.249106 2305 log.go:172] (0xc000209080) (0xc00068dc20) Stream added, broadcasting: 1\nI0516 21:57:08.251240 2305 log.go:172] (0xc000209080) Reply frame received for 1\nI0516 21:57:08.251266 2305 log.go:172] (0xc000209080) (0xc00073a000) Create stream\nI0516 21:57:08.251274 2305 log.go:172] (0xc000209080) (0xc00073a000) Stream added, broadcasting: 3\nI0516 21:57:08.252149 2305 log.go:172] (0xc000209080) Reply frame received for 3\nI0516 21:57:08.252183 2305 log.go:172] (0xc000209080) (0xc00057c000) Create stream\nI0516 21:57:08.252195 2305 log.go:172] (0xc000209080) (0xc00057c000) Stream added, broadcasting: 5\nI0516 21:57:08.252926 2305 log.go:172] (0xc000209080) Reply frame received for 5\nI0516 21:57:08.304018 2305 log.go:172] (0xc000209080) Data frame received for 5\nI0516 21:57:08.304040 2305 log.go:172] (0xc00057c000) (5) Data frame 
handling\nI0516 21:57:08.304050 2305 log.go:172] (0xc00057c000) (5) Data frame sent\n+ nslookup clusterip-service\nI0516 21:57:08.397570 2305 log.go:172] (0xc000209080) Data frame received for 3\nI0516 21:57:08.397599 2305 log.go:172] (0xc00073a000) (3) Data frame handling\nI0516 21:57:08.397624 2305 log.go:172] (0xc00073a000) (3) Data frame sent\nI0516 21:57:08.399467 2305 log.go:172] (0xc000209080) Data frame received for 3\nI0516 21:57:08.399560 2305 log.go:172] (0xc00073a000) (3) Data frame handling\nI0516 21:57:08.399582 2305 log.go:172] (0xc00073a000) (3) Data frame sent\nI0516 21:57:08.400564 2305 log.go:172] (0xc000209080) Data frame received for 5\nI0516 21:57:08.400622 2305 log.go:172] (0xc00057c000) (5) Data frame handling\nI0516 21:57:08.400653 2305 log.go:172] (0xc000209080) Data frame received for 3\nI0516 21:57:08.400663 2305 log.go:172] (0xc00073a000) (3) Data frame handling\nI0516 21:57:08.402956 2305 log.go:172] (0xc000209080) Data frame received for 1\nI0516 21:57:08.402972 2305 log.go:172] (0xc00068dc20) (1) Data frame handling\nI0516 21:57:08.403003 2305 log.go:172] (0xc00068dc20) (1) Data frame sent\nI0516 21:57:08.403217 2305 log.go:172] (0xc000209080) (0xc00068dc20) Stream removed, broadcasting: 1\nI0516 21:57:08.403289 2305 log.go:172] (0xc000209080) Go away received\nI0516 21:57:08.403578 2305 log.go:172] (0xc000209080) (0xc00068dc20) Stream removed, broadcasting: 1\nI0516 21:57:08.403596 2305 log.go:172] (0xc000209080) (0xc00073a000) Stream removed, broadcasting: 3\nI0516 21:57:08.403605 2305 log.go:172] (0xc000209080) (0xc00057c000) Stream removed, broadcasting: 5\n" May 16 21:57:08.409: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1630.svc.cluster.local\tcanonical name = externalsvc.services-1630.svc.cluster.local.\nName:\texternalsvc.services-1630.svc.cluster.local\nAddress: 10.109.80.182\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1630, will wait for the garbage collector to delete the pods May 16 21:57:08.468: INFO: Deleting ReplicationController externalsvc took: 6.246193ms May 16 21:57:08.769: INFO: Terminating ReplicationController externalsvc pods took: 300.545599ms May 16 21:57:13.719: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:57:13.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1630" for this suite. 
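
The ClusterIP-to-ExternalName flip driving the spec above is a plain update to the service spec; the nslookup in the exec pod then sees the service FQDN turn into a CNAME for the target. A sketch of the conversion, reusing the run's service names but an illustrative "default" namespace (note that API validation requires clusterIP to be cleared when switching a service to ExternalName):

    // Sketch: convert an existing ClusterIP service to ExternalName, as the
    // spec above does before resolving it through DNS.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        svcs := client.CoreV1().Services("default")

        svc, err := svcs.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = "externalsvc.default.svc.cluster.local"
        svc.Spec.ClusterIP = "" // must be empty for ExternalName services
        if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
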
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.357 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":162,"skipped":2505,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:57:13.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 16 21:57:13.921: INFO: Waiting up to 5m0s for pod "downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc" in namespace "downward-api-5852" to be "success or failure" May 16 21:57:13.948: INFO: Pod "downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.609799ms May 16 21:57:16.028: INFO: Pod "downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107194356s May 16 21:57:18.032: INFO: Pod "downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111069629s STEP: Saw pod success May 16 21:57:18.032: INFO: Pod "downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc" satisfied condition "success or failure" May 16 21:57:18.035: INFO: Trying to get logs from node jerma-worker pod downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc container dapi-container: STEP: delete the pod May 16 21:57:18.080: INFO: Waiting for pod downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc to disappear May 16 21:57:18.084: INFO: Pod downward-api-75bd2389-e0d9-4c72-9a67-8b177aece2dc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:57:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5852" for this suite. 
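
The defaulting this Downward API spec checks — limits.cpu and limits.memory falling back to node allocatable when the container declares no limits — is requested through resourceFieldRef environment variables. A minimal pod sketch with illustrative names:

    // Sketch: expose limits.cpu/limits.memory as env vars via the downward API.
    // With no resource limits declared on the container, the kubelet fills these
    // in from the node's allocatable capacity, which is what the spec verifies.
    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func dapiPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dapi-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {
                            Name: "CPU_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                            },
                        },
                        {
                            Name: "MEMORY_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                            },
                        },
                    },
                }},
            },
        }
    }
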
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:57:18.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 16 21:57:18.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3260' May 16 21:57:18.393: INFO: stderr: "" May 16 21:57:18.393: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 16 21:57:19.460: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:57:19.460: INFO: Found 0 / 1 May 16 21:57:20.447: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:57:20.448: INFO: Found 0 / 1 May 16 21:57:21.398: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:57:21.398: INFO: Found 0 / 1 May 16 21:57:22.399: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:57:22.399: INFO: Found 1 / 1 May 16 21:57:22.399: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 16 21:57:22.403: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:57:22.403: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 16 21:57:22.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-qqgpn --namespace=kubectl-3260 -p {"metadata":{"annotations":{"x":"y"}}}' May 16 21:57:22.518: INFO: stderr: "" May 16 21:57:22.518: INFO: stdout: "pod/agnhost-master-qqgpn patched\n" STEP: checking annotations May 16 21:57:22.521: INFO: Selector matched 1 pods for map[app:agnhost] May 16 21:57:22.521: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:57:22.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3260" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":164,"skipped":2542,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:57:22.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 16 21:57:22.609: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:57:22.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9886" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":165,"skipped":2545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:57:22.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 16 21:57:22.800: INFO: Waiting up to 5m0s for pod "pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5" in namespace "emptydir-4004" to be "success or failure" May 16 21:57:22.843: INFO: Pod "pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.368865ms May 16 21:57:24.846: INFO: Pod "pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045836832s May 16 21:57:26.851: INFO: Pod "pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050368687s STEP: Saw pod success May 16 21:57:26.851: INFO: Pod "pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5" satisfied condition "success or failure" May 16 21:57:26.854: INFO: Trying to get logs from node jerma-worker2 pod pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5 container test-container: STEP: delete the pod May 16 21:57:27.099: INFO: Waiting for pod pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5 to disappear May 16 21:57:27.214: INFO: Pod pod-f1fc0d1f-bd54-4cdf-965e-9dca8c6928a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:57:27.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4004" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2584,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:57:27.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:57:27.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 16 21:57:27.976: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T21:57:27Z generation:1 name:name1 resourceVersion:16745735 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6e9640df-d48d-454a-84de-450a711aff5b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 16 21:57:37.981: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T21:57:37Z generation:1 name:name2 resourceVersion:16745777 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:83d8aac8-5a96-4028-81a2-f6d02228c868] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 16 21:57:47.991: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T21:57:27Z generation:2 name:name1 resourceVersion:16745810 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6e9640df-d48d-454a-84de-450a711aff5b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 16 21:57:58.003: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T21:57:37Z generation:2 name:name2 resourceVersion:16745842 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:83d8aac8-5a96-4028-81a2-f6d02228c868] num:map[num1:9223372036854775807 
num2:1000000]]} STEP: Deleting first CR May 16 21:58:08.010: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T21:57:27Z generation:2 name:name1 resourceVersion:16745871 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6e9640df-d48d-454a-84de-450a711aff5b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 16 21:58:18.023: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-16T21:57:37Z generation:2 name:name2 resourceVersion:16745902 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:83d8aac8-5a96-4028-81a2-f6d02228c868] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:58:28.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4884" for this suite. • [SLOW TEST:61.335 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":167,"skipped":2597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:58:28.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 16 21:58:33.214: INFO: Successfully updated pod "adopt-release-r99bc" STEP: Checking that the Job readopts the Pod May 16 21:58:33.214: INFO: Waiting up to 15m0s for pod "adopt-release-r99bc" in namespace "job-470" to be "adopted" May 16 21:58:33.218: INFO: Pod "adopt-release-r99bc": Phase="Running", Reason="", readiness=true. Elapsed: 3.82056ms May 16 21:58:35.222: INFO: Pod "adopt-release-r99bc": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00756112s May 16 21:58:35.222: INFO: Pod "adopt-release-r99bc" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 16 21:58:35.731: INFO: Successfully updated pod "adopt-release-r99bc" STEP: Checking that the Job releases the Pod May 16 21:58:35.731: INFO: Waiting up to 15m0s for pod "adopt-release-r99bc" in namespace "job-470" to be "released" May 16 21:58:35.735: INFO: Pod "adopt-release-r99bc": Phase="Running", Reason="", readiness=true. Elapsed: 3.455521ms May 16 21:58:37.740: INFO: Pod "adopt-release-r99bc": Phase="Running", Reason="", readiness=true. Elapsed: 2.00861467s May 16 21:58:37.740: INFO: Pod "adopt-release-r99bc" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:58:37.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-470" for this suite. • [SLOW TEST:9.162 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":168,"skipped":2625,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:58:37.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3884.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3884.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 21:58:44.256: INFO: DNS probes using dns-test-3c3bd5af-dad1-4ef5-9e6f-195df84737e8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3884.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3884.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each 
expected name from probers May 16 21:58:52.382: INFO: File wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:58:52.386: INFO: File jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:58:52.386: INFO: Lookups using dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 failed for: [wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local] May 16 21:58:57.405: INFO: File wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:58:57.422: INFO: File jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:58:57.422: INFO: Lookups using dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 failed for: [wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local] May 16 21:59:02.392: INFO: File wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:59:02.396: INFO: File jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:59:02.396: INFO: Lookups using dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 failed for: [wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local] May 16 21:59:07.391: INFO: File wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:59:07.395: INFO: File jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 21:59:07.395: INFO: Lookups using dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 failed for: [wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local] May 16 21:59:12.395: INFO: File jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local from pod dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 16 21:59:12.396: INFO: Lookups using dns-3884/dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 failed for: [jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local] May 16 21:59:17.394: INFO: DNS probes using dns-test-f7908aa9-7ef1-44ab-942b-0c5c8fac75f6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3884.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3884.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3884.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3884.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 21:59:23.607: INFO: DNS probes using dns-test-2991b914-b03a-4bba-98b2-1a1e57d346da succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:59:23.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3884" for this suite. • [SLOW TEST:45.956 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":169,"skipped":2627,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:59:23.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 21:59:23.864: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:59:28.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3501" for this suite. 
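
The retry loop in the DNS ExternalName spec above (the repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines) is just a CNAME probe waiting for the resolver to reflect the edited externalName. A sketch of such a probe, assuming it runs inside the cluster so the service FQDN resolves through cluster DNS; the names mirror the run but are otherwise illustrative:

    // Sketch: poll until an ExternalName service's in-cluster DNS record
    // points at the expected CNAME target, mirroring the probe loop above.
    package main

    import (
        "fmt"
        "net"
        "strings"
        "time"
    )

    func main() {
        const name = "dns-test-service-3.dns-3884.svc.cluster.local"
        const want = "bar.example.com."
        for i := 0; i < 30; i++ {
            cname, err := net.LookupCNAME(name)
            if err == nil && strings.HasSuffix(cname, want) {
                fmt.Println("resolved to", cname)
                return
            }
            // Stale records linger until caches and TTLs expire, hence the retries.
            time.Sleep(time.Second)
        }
        fmt.Println("CNAME never converged to", want)
    }
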
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2648,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:59:28.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 21:59:29.384: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 21:59:31.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263169, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263169, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 21:59:34.430: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:59:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6232" for this suite. STEP: Destroying namespace "webhook-6232-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.853 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":171,"skipped":2657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:59:35.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 16 21:59:35.691: INFO: created pod pod-service-account-defaultsa May 16 21:59:35.691: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 16 21:59:35.917: INFO: created pod pod-service-account-mountsa May 16 21:59:35.917: INFO: pod pod-service-account-mountsa service account token volume mount: true May 16 21:59:35.951: INFO: created pod pod-service-account-nomountsa May 16 21:59:35.951: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 16 21:59:35.996: INFO: created pod pod-service-account-defaultsa-mountspec May 16 21:59:35.996: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 16 21:59:36.065: INFO: created pod pod-service-account-mountsa-mountspec May 16 21:59:36.065: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 16 21:59:36.092: INFO: created pod pod-service-account-nomountsa-mountspec May 16 21:59:36.092: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 16 21:59:36.108: INFO: created pod pod-service-account-defaultsa-nomountspec May 16 21:59:36.108: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 16 21:59:36.385: INFO: created pod pod-service-account-mountsa-nomountspec May 16 21:59:36.385: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 16 21:59:36.389: INFO: created pod pod-service-account-nomountsa-nomountspec May 16 21:59:36.389: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 21:59:36.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6598" for this suite. 
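
The mount/no-mount matrix the ServiceAccounts spec prints above reduces to one precedence rule: the pod-level automountServiceAccountToken field overrides the service-account-level one, and when both are unset the token is mounted. A sketch of the two knobs with illustrative names:

    // Sketch: opting out of API token automount. The pod-level field overrides
    // the service-account-level one; if both are nil the token is mounted.
    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func noAutomount() (*corev1.ServiceAccount, *corev1.Pod) {
        sa := &corev1.ServiceAccount{
            ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
            AutomountServiceAccountToken: boolPtr(false), // account-level default
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-nomount"},
            Spec: corev1.PodSpec{
                ServiceAccountName:           "nomount-sa",
                AutomountServiceAccountToken: boolPtr(false), // pod-level override wins
                Containers: []corev1.Container{{Name: "c", Image: "busybox"}},
            },
        }
        return sa, pod
    }
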
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":172,"skipped":2682,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 21:59:36.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7927 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7927 I0516 21:59:37.190545 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7927, replica count: 2 I0516 21:59:40.240962 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:59:43.241336 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:59:46.241524 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:59:49.241709 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 21:59:52.241912 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 21:59:52.241: INFO: Creating new exec pod May 16 21:59:57.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7927 execpodfp85v -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 16 22:00:00.454: INFO: stderr: "I0516 22:00:00.324229 2384 log.go:172] (0xc000808000) (0xc00085e1e0) Create stream\nI0516 22:00:00.324277 2384 log.go:172] (0xc000808000) (0xc00085e1e0) Stream added, broadcasting: 1\nI0516 22:00:00.327849 2384 log.go:172] (0xc000808000) Reply frame received for 1\nI0516 22:00:00.327931 2384 log.go:172] (0xc000808000) (0xc00085e280) Create stream\nI0516 22:00:00.327949 2384 log.go:172] (0xc000808000) (0xc00085e280) Stream added, broadcasting: 3\nI0516 22:00:00.329771 2384 log.go:172] (0xc000808000) Reply frame received for 3\nI0516 22:00:00.329810 2384 log.go:172] (0xc000808000) (0xc000844000) Create stream\nI0516 22:00:00.329822 2384 log.go:172] (0xc000808000) (0xc000844000) Stream added, broadcasting: 5\nI0516 22:00:00.330809 2384 log.go:172] (0xc000808000) Reply frame received for 5\nI0516 22:00:00.423607 2384 log.go:172] (0xc000808000) Data frame received for 5\nI0516 22:00:00.423640 2384 
log.go:172] (0xc000844000) (5) Data frame handling\nI0516 22:00:00.423664 2384 log.go:172] (0xc000844000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0516 22:00:00.442709 2384 log.go:172] (0xc000808000) Data frame received for 5\nI0516 22:00:00.442753 2384 log.go:172] (0xc000844000) (5) Data frame handling\nI0516 22:00:00.442820 2384 log.go:172] (0xc000844000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0516 22:00:00.443091 2384 log.go:172] (0xc000808000) Data frame received for 3\nI0516 22:00:00.443116 2384 log.go:172] (0xc00085e280) (3) Data frame handling\nI0516 22:00:00.443209 2384 log.go:172] (0xc000808000) Data frame received for 5\nI0516 22:00:00.443224 2384 log.go:172] (0xc000844000) (5) Data frame handling\nI0516 22:00:00.445061 2384 log.go:172] (0xc000808000) Data frame received for 1\nI0516 22:00:00.445095 2384 log.go:172] (0xc00085e1e0) (1) Data frame handling\nI0516 22:00:00.445312 2384 log.go:172] (0xc00085e1e0) (1) Data frame sent\nI0516 22:00:00.445353 2384 log.go:172] (0xc000808000) (0xc00085e1e0) Stream removed, broadcasting: 1\nI0516 22:00:00.445386 2384 log.go:172] (0xc000808000) Go away received\nI0516 22:00:00.445839 2384 log.go:172] (0xc000808000) (0xc00085e1e0) Stream removed, broadcasting: 1\nI0516 22:00:00.445863 2384 log.go:172] (0xc000808000) (0xc00085e280) Stream removed, broadcasting: 3\nI0516 22:00:00.445873 2384 log.go:172] (0xc000808000) (0xc000844000) Stream removed, broadcasting: 5\n" May 16 22:00:00.454: INFO: stdout: "" May 16 22:00:00.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7927 execpodfp85v -- /bin/sh -x -c nc -zv -t -w 2 10.100.65.137 80' May 16 22:00:00.662: INFO: stderr: "I0516 22:00:00.592969 2418 log.go:172] (0xc0009d0a50) (0xc0006ce140) Create stream\nI0516 22:00:00.593031 2418 log.go:172] (0xc0009d0a50) (0xc0006ce140) Stream added, broadcasting: 1\nI0516 22:00:00.595470 2418 log.go:172] (0xc0009d0a50) Reply frame received for 1\nI0516 22:00:00.595502 2418 log.go:172] (0xc0009d0a50) (0xc000453a40) Create stream\nI0516 22:00:00.595521 2418 log.go:172] (0xc0009d0a50) (0xc000453a40) Stream added, broadcasting: 3\nI0516 22:00:00.596367 2418 log.go:172] (0xc0009d0a50) Reply frame received for 3\nI0516 22:00:00.596424 2418 log.go:172] (0xc0009d0a50) (0xc000504000) Create stream\nI0516 22:00:00.596445 2418 log.go:172] (0xc0009d0a50) (0xc000504000) Stream added, broadcasting: 5\nI0516 22:00:00.597768 2418 log.go:172] (0xc0009d0a50) Reply frame received for 5\nI0516 22:00:00.654264 2418 log.go:172] (0xc0009d0a50) Data frame received for 3\nI0516 22:00:00.654311 2418 log.go:172] (0xc000453a40) (3) Data frame handling\nI0516 22:00:00.654337 2418 log.go:172] (0xc0009d0a50) Data frame received for 5\nI0516 22:00:00.654360 2418 log.go:172] (0xc000504000) (5) Data frame handling\nI0516 22:00:00.654375 2418 log.go:172] (0xc000504000) (5) Data frame sent\nI0516 22:00:00.654387 2418 log.go:172] (0xc0009d0a50) Data frame received for 5\nI0516 22:00:00.654395 2418 log.go:172] (0xc000504000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.65.137 80\nConnection to 10.100.65.137 80 port [tcp/http] succeeded!\nI0516 22:00:00.655752 2418 log.go:172] (0xc0009d0a50) Data frame received for 1\nI0516 22:00:00.655789 2418 log.go:172] (0xc0006ce140) (1) Data frame handling\nI0516 22:00:00.655805 2418 log.go:172] (0xc0006ce140) (1) Data frame sent\nI0516 22:00:00.655824 2418 log.go:172] (0xc0009d0a50) (0xc0006ce140) Stream removed, broadcasting: 
1\nI0516 22:00:00.655852 2418 log.go:172] (0xc0009d0a50) Go away received\nI0516 22:00:00.656100 2418 log.go:172] (0xc0009d0a50) (0xc0006ce140) Stream removed, broadcasting: 1\nI0516 22:00:00.656115 2418 log.go:172] (0xc0009d0a50) (0xc000453a40) Stream removed, broadcasting: 3\nI0516 22:00:00.656123 2418 log.go:172] (0xc0009d0a50) (0xc000504000) Stream removed, broadcasting: 5\n" May 16 22:00:00.662: INFO: stdout: "" May 16 22:00:00.662: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:00.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7927" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:24.093 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":173,"skipped":2690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:00.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:00:01.404: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:00:03.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263201, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263201, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263201, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263201, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} 
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:00:06.560: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:06.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-562" for this suite. STEP: Destroying namespace "webhook-562-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.402 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":174,"skipped":2714,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:07.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:23.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7156" for this suite. • [SLOW TEST:16.276 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":175,"skipped":2719,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:23.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-31eb0e3a-609f-4d4c-903c-78d7b6db6759 STEP: Creating a pod to test consume secrets May 16 22:00:23.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74" in namespace "projected-2180" to be "success or failure" May 16 22:00:23.534: INFO: Pod "pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74": Phase="Pending", Reason="", readiness=false. Elapsed: 18.771388ms May 16 22:00:25.539: INFO: Pod "pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023743517s May 16 22:00:27.543: INFO: Pod "pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02753793s STEP: Saw pod success May 16 22:00:27.543: INFO: Pod "pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74" satisfied condition "success or failure" May 16 22:00:27.546: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74 container projected-secret-volume-test: STEP: delete the pod May 16 22:00:27.576: INFO: Waiting for pod pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74 to disappear May 16 22:00:27.593: INFO: Pod pod-projected-secrets-1c953e13-bbf7-47ae-9e78-a9c3612a7a74 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:27.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2180" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:27.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 16 22:00:27.692: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2940 /api/v1/namespaces/watch-2940/configmaps/e2e-watch-test-resource-version 962b740f-849a-412b-a6db-455d865be917 16746862 0 2020-05-16 22:00:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 16 22:00:27.692: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2940 /api/v1/namespaces/watch-2940/configmaps/e2e-watch-test-resource-version 962b740f-849a-412b-a6db-455d865be917 16746863 0 2020-05-16 22:00:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:27.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2940" for this suite. 
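Note: the watch above is anchored at the resourceVersion returned by the first update, so only the later MODIFIED (16746862) and DELETED (16746863) events are delivered; the earlier changes are not replayed. The same stream can be opened by hand through the raw API (the resourceVersion below is an illustrative value just before the events shown):

  # Watch configmaps starting from a known resourceVersion; older events are not replayed
  kubectl get --raw "/api/v1/namespaces/watch-2940/configmaps?watch=true&resourceVersion=16746861"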
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":177,"skipped":2747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:27.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 16 22:00:27.764: INFO: Created pod &Pod{ObjectMeta:{dns-9666 dns-9666 /api/v1/namespaces/dns-9666/pods/dns-9666 1cc28f9c-2c22-49b2-bdf2-839ef6326d96 16746869 0 2020-05-16 22:00:27 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dqbdm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dqbdm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dqbdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcess
Namespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 16 22:00:31.801: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9666 PodName:dns-9666 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 22:00:31.801: INFO: >>> kubeConfig: /root/.kube/config I0516 22:00:31.839917 6 log.go:172] (0xc000bd2a50) (0xc000c79d60) Create stream I0516 22:00:31.839959 6 log.go:172] (0xc000bd2a50) (0xc000c79d60) Stream added, broadcasting: 1 I0516 22:00:31.842060 6 log.go:172] (0xc000bd2a50) Reply frame received for 1 I0516 22:00:31.842108 6 log.go:172] (0xc000bd2a50) (0xc000ae79a0) Create stream I0516 22:00:31.842121 6 log.go:172] (0xc000bd2a50) (0xc000ae79a0) Stream added, broadcasting: 3 I0516 22:00:31.843283 6 log.go:172] (0xc000bd2a50) Reply frame received for 3 I0516 22:00:31.843334 6 log.go:172] (0xc000bd2a50) (0xc000c79f40) Create stream I0516 22:00:31.843350 6 log.go:172] (0xc000bd2a50) (0xc000c79f40) Stream added, broadcasting: 5 I0516 22:00:31.844279 6 log.go:172] (0xc000bd2a50) Reply frame received for 5 I0516 22:00:31.944258 6 log.go:172] (0xc000bd2a50) Data frame received for 3 I0516 22:00:31.944286 6 log.go:172] (0xc000ae79a0) (3) Data frame handling I0516 22:00:31.944300 6 log.go:172] (0xc000ae79a0) (3) Data frame sent I0516 22:00:31.945690 6 log.go:172] (0xc000bd2a50) Data frame received for 3 I0516 22:00:31.945713 6 log.go:172] (0xc000ae79a0) (3) Data frame handling I0516 22:00:31.945736 6 log.go:172] (0xc000bd2a50) Data frame received for 5 I0516 22:00:31.945755 6 log.go:172] (0xc000c79f40) (5) Data frame handling I0516 22:00:31.947305 6 log.go:172] (0xc000bd2a50) Data frame received for 1 I0516 22:00:31.947323 6 log.go:172] (0xc000c79d60) (1) Data frame handling I0516 22:00:31.947332 6 log.go:172] (0xc000c79d60) (1) Data frame sent I0516 22:00:31.947360 6 log.go:172] (0xc000bd2a50) (0xc000c79d60) Stream removed, broadcasting: 1 I0516 22:00:31.947388 6 log.go:172] (0xc000bd2a50) Go away received I0516 22:00:31.947474 6 log.go:172] (0xc000bd2a50) (0xc000c79d60) Stream removed, broadcasting: 1 I0516 22:00:31.947496 6 log.go:172] (0xc000bd2a50) (0xc000ae79a0) Stream removed, broadcasting: 3 I0516 22:00:31.947511 6 log.go:172] (0xc000bd2a50) (0xc000c79f40) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 16 22:00:31.947: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9666 PodName:dns-9666 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 22:00:31.947: INFO: >>> kubeConfig: /root/.kube/config I0516 22:00:31.976227 6 log.go:172] (0xc0025fe4d0) (0xc000ae7cc0) Create stream I0516 22:00:31.976248 6 log.go:172] (0xc0025fe4d0) (0xc000ae7cc0) Stream added, broadcasting: 1 I0516 22:00:31.978262 6 log.go:172] (0xc0025fe4d0) Reply frame received for 1 I0516 22:00:31.978309 6 log.go:172] (0xc0025fe4d0) (0xc001bf2140) Create stream I0516 22:00:31.978321 6 log.go:172] (0xc0025fe4d0) (0xc001bf2140) Stream added, broadcasting: 3 I0516 22:00:31.979441 6 log.go:172] (0xc0025fe4d0) Reply frame received for 3 I0516 22:00:31.979485 6 log.go:172] (0xc0025fe4d0) (0xc000d96fa0) Create stream I0516 22:00:31.979500 6 log.go:172] (0xc0025fe4d0) (0xc000d96fa0) Stream added, broadcasting: 5 I0516 22:00:31.980672 6 log.go:172] (0xc0025fe4d0) Reply frame received for 5 I0516 22:00:32.057959 6 log.go:172] (0xc0025fe4d0) Data frame received for 3 I0516 22:00:32.057982 6 log.go:172] (0xc001bf2140) (3) Data frame handling I0516 22:00:32.057997 6 log.go:172] (0xc001bf2140) (3) Data frame sent I0516 22:00:32.059811 6 log.go:172] (0xc0025fe4d0) Data frame received for 5 I0516 22:00:32.059843 6 log.go:172] (0xc000d96fa0) (5) Data frame handling I0516 22:00:32.059866 6 log.go:172] (0xc0025fe4d0) Data frame received for 3 I0516 22:00:32.059880 6 log.go:172] (0xc001bf2140) (3) Data frame handling I0516 22:00:32.061325 6 log.go:172] (0xc0025fe4d0) Data frame received for 1 I0516 22:00:32.061353 6 log.go:172] (0xc000ae7cc0) (1) Data frame handling I0516 22:00:32.061368 6 log.go:172] (0xc000ae7cc0) (1) Data frame sent I0516 22:00:32.061381 6 log.go:172] (0xc0025fe4d0) (0xc000ae7cc0) Stream removed, broadcasting: 1 I0516 22:00:32.061396 6 log.go:172] (0xc0025fe4d0) Go away received I0516 22:00:32.061607 6 log.go:172] (0xc0025fe4d0) (0xc000ae7cc0) Stream removed, broadcasting: 1 I0516 22:00:32.061655 6 log.go:172] (0xc0025fe4d0) (0xc001bf2140) Stream removed, broadcasting: 3 I0516 22:00:32.061676 6 log.go:172] (0xc0025fe4d0) (0xc000d96fa0) Stream removed, broadcasting: 5 May 16 22:00:32.061: INFO: Deleting pod dns-9666... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:32.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9666" for this suite. 
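Note: the relevant part of the pod dump above is DNSPolicy:None plus DNSConfig with Nameservers:[1.1.1.1] and Searches:[resolv.conf.local]. Reduced to a manifest, that is (pod name and image as used by the spec):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-9666
  spec:
    dnsPolicy: None              # ignore cluster DNS entirely; use dnsConfig as given
    dnsConfig:
      nameservers: ["1.1.1.1"]
      searches: ["resolv.conf.local"]
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
  EOF
  # /etc/resolv.conf inside the pod now lists only 1.1.1.1 and the custom search path
  kubectl exec dns-9666 -- cat /etc/resolv.conf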
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":178,"skipped":2780,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:32.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2410.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2410.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2410.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2410.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 22:00:38.647: INFO: DNS probes using dns-2410/dns-test-b542f8a2-c4f5-4a0d-b462-1c132092f70d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:38.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2410" for this suite. 
• [SLOW TEST:6.658 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":179,"skipped":2792,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:38.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6392.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6392.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6392.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6392.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6392.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6392.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 22:00:45.264: INFO: DNS probes using dns-6392/dns-test-429cda8d-1055-4588-b44e-90b30338a9a4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:45.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6392" for this suite. 
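Note: the "hosts@" probes above go through getent, which consults the kubelet-managed /etc/hosts inside the pod before DNS; that file always carries the pod's own hostname entry, which is why dns-querier-1 resolves without any DNS query. A quick way to inspect this from outside (pod name is a placeholder):

  kubectl exec <pod> -- cat /etc/hosts
  # From inside the dns-querier-1 pod, its own hostname resolves via /etc/hosts, not DNS
  kubectl exec <pod> -- getent hosts dns-querier-1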
• [SLOW TEST:6.568 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":180,"skipped":2800,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:45.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 16 22:00:50.540: INFO: Successfully updated pod "pod-update-53ffca84-5dbc-457c-a6c7-368c390aedf0" STEP: verifying the updated pod is in kubernetes May 16 22:00:50.569: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:50.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9825" for this suite. 
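Note: "updating the pod" works here because pod metadata stays mutable after creation even though most of the spec does not; this conformance test updates the pod's labels. The equivalent by hand, against the pod from the log above (label key/value are illustrative):

  # In-place label update on the running pod
  kubectl label pod pod-update-53ffca84-5dbc-457c-a6c7-368c390aedf0 -n pods-9825 time=morning --overwrite
  # The same change as a strategic-merge patch
  kubectl patch pod pod-update-53ffca84-5dbc-457c-a6c7-368c390aedf0 -n pods-9825 \
    -p '{"metadata":{"labels":{"time":"morning"}}}'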
• [SLOW TEST:5.227 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2818,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:50.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 16 22:00:50.671: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:00:59.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5161" for this suite. • [SLOW TEST:8.508 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":182,"skipped":2827,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:00:59.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 16 22:00:59.226: INFO: Waiting up to 5m0s for pod "pod-c615e654-9957-4be9-aadb-c929c7c43d5c" in namespace "emptydir-865" to be "success or failure" May 16 22:00:59.253: INFO: Pod "pod-c615e654-9957-4be9-aadb-c929c7c43d5c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.559326ms May 16 22:01:01.327: INFO: Pod "pod-c615e654-9957-4be9-aadb-c929c7c43d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100674942s May 16 22:01:03.397: INFO: Pod "pod-c615e654-9957-4be9-aadb-c929c7c43d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17100161s STEP: Saw pod success May 16 22:01:03.397: INFO: Pod "pod-c615e654-9957-4be9-aadb-c929c7c43d5c" satisfied condition "success or failure" May 16 22:01:03.400: INFO: Trying to get logs from node jerma-worker2 pod pod-c615e654-9957-4be9-aadb-c929c7c43d5c container test-container: STEP: delete the pod May 16 22:01:03.486: INFO: Waiting for pod pod-c615e654-9957-4be9-aadb-c929c7c43d5c to disappear May 16 22:01:03.492: INFO: Pod pod-c615e654-9957-4be9-aadb-c929c7c43d5c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:01:03.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-865" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2835,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:01:03.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:01:03.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b" in namespace "projected-3381" to be "success or failure" May 16 22:01:03.654: INFO: Pod "downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b": Phase="Pending", Reason="", readiness=false. Elapsed: 63.328282ms May 16 22:01:05.678: INFO: Pod "downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087266975s May 16 22:01:07.682: INFO: Pod "downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.091633285s STEP: Saw pod success May 16 22:01:07.683: INFO: Pod "downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b" satisfied condition "success or failure" May 16 22:01:07.686: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b container client-container: STEP: delete the pod May 16 22:01:07.709: INFO: Waiting for pod downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b to disappear May 16 22:01:07.714: INFO: Pod downwardapi-volume-24ec5550-ce9b-4471-9fbd-e56708e8297b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:01:07.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3381" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2847,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:01:07.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 16 22:01:07.811: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:01:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-861" for this suite. 
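Note: "renaming" a CRD version is an update that serves the new name and stops serving the old one, while the storage flag must remain on exactly one version throughout. A minimal two-version sketch of the post-rename state (group, kind, and version names are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names: {plural: foos, singular: foo, kind: Foo}
    versions:
    - name: v2                   # renamed from v1; the old name is no longer served
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object}}
    - name: v3                   # the other version keeps serving unchanged
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object}}
  EOF
  # The published spec now carries v2 paths where v1 used to be
  kubectl get --raw /openapi/v2 | grep -o '/apis/example.com/v2[^"]*' | sort -u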
• [SLOW TEST:14.655 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":185,"skipped":2847,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:01:22.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:01:22.937: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:01:24.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263282, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:01:27.967: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:01:28.050: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5983" for this suite. STEP: Destroying namespace "webhook-5983-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.760 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":186,"skipped":2862,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:01:28.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 16 22:01:36.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:36.312: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:38.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:38.317: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:40.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:40.316: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:42.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:42.316: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:44.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:44.316: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:46.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:46.317: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:48.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:48.317: INFO: Pod pod-with-poststart-http-hook still exists May 16 22:01:50.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 16 22:01:50.321: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:01:50.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-908" for this suite. 
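Note: the spec first starts a separate handler pod (the HTTPGet target created in the BeforeEach above), then points the hook at it. The shape of the hooked pod, reduced to the relevant fields (host and path are illustrative stand-ins for the handler pod's IP and echo endpoint):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: main
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.2.1     # illustrative: IP of the handler pod
            port: 8080
            path: /echo?msg=poststart
  EOF

The kubelet does not mark the container Running until the postStart handler completes, which is why the "check poststart hook" step can assert that the handler saw the request.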
• [SLOW TEST:22.206 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:01:50.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 22:01:54.547: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:01:54.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6884" for this suite. 
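Note: FallbackToLogsOnError only substitutes container logs for the termination message when the container exits with an error and the message file is empty; here the container succeeds, so the message stays empty, which is what "Expected: &{} to match" asserts. A minimal reproduction (pod name and command are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "exit 0"]              # succeed without writing a message
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # Empty output: a successful exit means no fallback to logs
  kubectl get pod termination-message-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'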
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:01:54.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 16 22:01:54.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-254' May 16 22:01:54.767: INFO: stderr: "" May 16 22:01:54.767: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 16 22:01:59.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-254 -o json' May 16 22:01:59.915: INFO: stderr: "" May 16 22:01:59.915: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-16T22:01:54Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-254\",\n \"resourceVersion\": \"16747540\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-254/pods/e2e-test-httpd-pod\",\n \"uid\": \"9b9799cf-bd69-4e73-a5c0-e600c6a942e9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rhl8p\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": 
\"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rhl8p\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rhl8p\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T22:01:54Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T22:01:58Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T22:01:58Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T22:01:54Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://4a11197be38dfdea0e515d957a8df2a897f4dae37e646aeff12abcee5866fe8f\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-16T22:01:57Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.182\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.182\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-16T22:01:54Z\"\n }\n}\n" STEP: replace the image in the pod May 16 22:01:59.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-254' May 16 22:02:00.224: INFO: stderr: "" May 16 22:02:00.224: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 16 22:02:00.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-254' May 16 22:02:09.530: INFO: stderr: "" May 16 22:02:09.530: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:02:09.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-254" for this suite. 
• [SLOW TEST:14.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":189,"skipped":2942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:02:09.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:02:14.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2960" for this suite. 
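The property being checked above is that the API server serializes watch events: two watches opened from different starting resourceVersions must deliver the events they have in common in the same order. It can be observed by hand through the watch API (a sketch; the port, namespace, resource, and starting versions are illustrative):

  kubectl proxy --port=8001 &
  # Two concurrent watches over the same resource; events common to both
  # streams must arrive in the same resourceVersion order.
  curl -N 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=0' &
  curl -N 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=100'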
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":190,"skipped":2965,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:02:14.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:02:44.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1313" for this suite. 
• [SLOW TEST:30.568 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":2965,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:02:44.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2479 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2479 STEP: Creating statefulset with conflicting port in namespace statefulset-2479 STEP: Waiting until pod test-pod starts running in namespace statefulset-2479 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2479 May 16 22:02:49.082: INFO: Observed stateful pod in namespace: statefulset-2479, name: ss-0, uid: ff0b6ada-9b34-408a-b1a0-80a4106968a3, status phase: Pending. Waiting for statefulset controller to delete. May 16 22:02:49.395: INFO: Observed stateful pod in namespace: statefulset-2479, name: ss-0, uid: ff0b6ada-9b34-408a-b1a0-80a4106968a3, status phase: Failed. Waiting for statefulset controller to delete. May 16 22:02:49.407: INFO: Observed stateful pod in namespace: statefulset-2479, name: ss-0, uid: ff0b6ada-9b34-408a-b1a0-80a4106968a3, status phase: Failed. Waiting for statefulset controller to delete.
May 16 22:02:49.435: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2479 STEP: Removing pod with conflicting port in namespace statefulset-2479 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2479 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 16 22:02:53.514: INFO: Deleting all statefulsets in ns statefulset-2479 May 16 22:02:53.517: INFO: Scaling statefulset ss to 0 May 16 22:03:03.542: INFO: Waiting for statefulset status.replicas updated to 0 May 16 22:03:03.544: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:03:03.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2479" for this suite. • [SLOW TEST:18.663 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":192,"skipped":2966,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:03:03.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 16 22:03:03.618: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 16 22:03:12.720: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:03:12.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6928" for this suite.
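The submit/remove sequence above is driven entirely through the watch: the test registers a watch, sees the ADDED event on creation, then deletes the pod gracefully and waits for the DELETED event. The same round trip by hand (a sketch; the pod name and image are illustrative):

  kubectl run pod-submit-demo --image=docker.io/library/nginx:1.14-alpine
  # The watch reports the creation now and the deletion later:
  kubectl get pod pod-submit-demo --watch &
  # Graceful delete: the kubelet receives the termination notice, and the
  # object disappears once the container has stopped.
  kubectl delete pod pod-submit-demo --grace-period=30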
• [SLOW TEST:9.168 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:03:12.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 16 22:03:17.351: INFO: Successfully updated pod "labelsupdatefe51bbdc-ab65-4b51-98c3-3ea962c99163" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:03:19.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2462" for this suite. 
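The update above works because a projected downward API volume is resynced by the kubelet when pod metadata changes, so a relabel shows up in the mounted file without restarting the container. A minimal reproduction (a sketch; names and label values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo   # illustrative name
    labels:
      stage: before
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF
  kubectl label pod labels-demo stage=after --overwrite
  # After the kubelet's next sync the file reflects the new label:
  kubectl exec labels-demo -- cat /etc/podinfo/labels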
• [SLOW TEST:6.654 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3004,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:03:19.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-31cc1eda-f6d1-45ca-95a9-7bbc7b884e87 in namespace container-probe-9676 May 16 22:03:23.503: INFO: Started pod busybox-31cc1eda-f6d1-45ca-95a9-7bbc7b884e87 in namespace container-probe-9676 STEP: checking the pod's current state and verifying that restartCount is present May 16 22:03:23.507: INFO: Initial restart count of pod busybox-31cc1eda-f6d1-45ca-95a9-7bbc7b884e87 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:07:24.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9676" for this suite. 
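The four-minute window above is the test confirming that restartCount stays at 0 while the exec probe keeps succeeding. The pod shape it exercises is roughly this (a sketch; name, delays, and the sleep are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo   # illustrative name
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      # Create the health file once and keep the container alive.
      command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # As long as /tmp/health exists, this stays at 0:
  kubectl get pod liveness-exec-demo \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'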
• [SLOW TEST:245.047 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:07:24.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-4021a3c1-9f3a-4785-b405-c85db491a042 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:07:24.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1658" for this suite. 
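The failure above is pure server-side validation: a Secret whose data map contains an empty key is rejected by the API server before anything is persisted, which is why the test finishes in milliseconds. For example (a sketch; the name is illustrative and the error text is approximate):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey-demo   # illustrative name
  data:
    "": dmFsdWUtMQ==
  EOF
  # Expect a validation error along the lines of:
  #   Invalid value: "": a valid config key must consist of alphanumeric
  #   characters, '-', '_' or '.'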
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":196,"skipped":3086,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:07:24.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9826 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 22:07:24.566: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 22:07:54.669: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.132 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9826 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 22:07:54.669: INFO: >>> kubeConfig: /root/.kube/config I0516 22:07:54.706881 6 log.go:172] (0xc002517a20) (0xc0016668c0) Create stream I0516 22:07:54.706928 6 log.go:172] (0xc002517a20) (0xc0016668c0) Stream added, broadcasting: 1 I0516 22:07:54.708916 6 log.go:172] (0xc002517a20) Reply frame received for 1 I0516 22:07:54.708966 6 log.go:172] (0xc002517a20) (0xc001666960) Create stream I0516 22:07:54.708981 6 log.go:172] (0xc002517a20) (0xc001666960) Stream added, broadcasting: 3 I0516 22:07:54.710435 6 log.go:172] (0xc002517a20) Reply frame received for 3 I0516 22:07:54.710486 6 log.go:172] (0xc002517a20) (0xc000f47e00) Create stream I0516 22:07:54.710500 6 log.go:172] (0xc002517a20) (0xc000f47e00) Stream added, broadcasting: 5 I0516 22:07:54.711401 6 log.go:172] (0xc002517a20) Reply frame received for 5 I0516 22:07:55.790693 6 log.go:172] (0xc002517a20) Data frame received for 5 I0516 22:07:55.790740 6 log.go:172] (0xc000f47e00) (5) Data frame handling I0516 22:07:55.790790 6 log.go:172] (0xc002517a20) Data frame received for 3 I0516 22:07:55.790883 6 log.go:172] (0xc001666960) (3) Data frame handling I0516 22:07:55.790952 6 log.go:172] (0xc001666960) (3) Data frame sent I0516 22:07:55.792415 6 log.go:172] (0xc002517a20) Data frame received for 3 I0516 22:07:55.792446 6 log.go:172] (0xc001666960) (3) Data frame handling I0516 22:07:55.816577 6 log.go:172] (0xc002517a20) Data frame received for 1 I0516 22:07:55.816600 6 log.go:172] (0xc0016668c0) (1) Data frame handling I0516 22:07:55.816607 6 log.go:172] (0xc0016668c0) (1) Data frame sent I0516 22:07:55.816615 6 log.go:172] (0xc002517a20) (0xc0016668c0) Stream removed, broadcasting: 1 I0516 22:07:55.816638 6 log.go:172] (0xc002517a20) Go away received I0516 22:07:55.816696 6 log.go:172] (0xc002517a20) (0xc0016668c0) Stream removed, broadcasting: 1 I0516 22:07:55.816710 6 log.go:172] (0xc002517a20) (0xc001666960) Stream removed, broadcasting: 3 I0516 22:07:55.816715 6 log.go:172] (0xc002517a20) 
(0xc000f47e00) Stream removed, broadcasting: 5 May 16 22:07:55.816: INFO: Found all expected endpoints: [netserver-0] May 16 22:07:55.819: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.185 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9826 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 22:07:55.819: INFO: >>> kubeConfig: /root/.kube/config I0516 22:07:55.841564 6 log.go:172] (0xc002ca4210) (0xc0016670e0) Create stream I0516 22:07:55.841589 6 log.go:172] (0xc002ca4210) (0xc0016670e0) Stream added, broadcasting: 1 I0516 22:07:55.843269 6 log.go:172] (0xc002ca4210) Reply frame received for 1 I0516 22:07:55.843354 6 log.go:172] (0xc002ca4210) (0xc0016ce000) Create stream I0516 22:07:55.843380 6 log.go:172] (0xc002ca4210) (0xc0016ce000) Stream added, broadcasting: 3 I0516 22:07:55.844228 6 log.go:172] (0xc002ca4210) Reply frame received for 3 I0516 22:07:55.844259 6 log.go:172] (0xc002ca4210) (0xc0016672c0) Create stream I0516 22:07:55.844272 6 log.go:172] (0xc002ca4210) (0xc0016672c0) Stream added, broadcasting: 5 I0516 22:07:55.845089 6 log.go:172] (0xc002ca4210) Reply frame received for 5 I0516 22:07:56.895195 6 log.go:172] (0xc002ca4210) Data frame received for 3 I0516 22:07:56.895244 6 log.go:172] (0xc0016ce000) (3) Data frame handling I0516 22:07:56.895274 6 log.go:172] (0xc0016ce000) (3) Data frame sent I0516 22:07:56.895290 6 log.go:172] (0xc002ca4210) Data frame received for 3 I0516 22:07:56.895303 6 log.go:172] (0xc0016ce000) (3) Data frame handling I0516 22:07:56.895409 6 log.go:172] (0xc002ca4210) Data frame received for 5 I0516 22:07:56.895429 6 log.go:172] (0xc0016672c0) (5) Data frame handling I0516 22:07:56.897830 6 log.go:172] (0xc002ca4210) Data frame received for 1 I0516 22:07:56.897867 6 log.go:172] (0xc0016670e0) (1) Data frame handling I0516 22:07:56.897902 6 log.go:172] (0xc0016670e0) (1) Data frame sent I0516 22:07:56.897936 6 log.go:172] (0xc002ca4210) (0xc0016670e0) Stream removed, broadcasting: 1 I0516 22:07:56.897978 6 log.go:172] (0xc002ca4210) Go away received I0516 22:07:56.898038 6 log.go:172] (0xc002ca4210) (0xc0016670e0) Stream removed, broadcasting: 1 I0516 22:07:56.898068 6 log.go:172] (0xc002ca4210) (0xc0016ce000) Stream removed, broadcasting: 3 I0516 22:07:56.898091 6 log.go:172] (0xc002ca4210) (0xc0016672c0) Stream removed, broadcasting: 5 May 16 22:07:56.898: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:07:56.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9826" for this suite. 
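The two exec blocks in the log are the entire UDP check: from the host-network test pod, send a datagram to each netserver pod IP and confirm a non-empty hostname comes back. Reconstructed as a single command (pod name, container, namespace, and IP are taken from the log above):

  kubectl exec -n pod-network-test-9826 host-test-container-pod -c agnhost -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.132 8081 | grep -v '^\s*$'"

The same probe is then repeated against 10.244.2.185 for netserver-1.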
• [SLOW TEST:32.415 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3091,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:07:56.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 16 22:07:56.959: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 22:07:56.979: INFO: Waiting for terminating namespaces to be deleted... May 16 22:07:56.981: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 16 22:07:56.996: INFO: test-container-pod from pod-network-test-9826 started at 2020-05-16 22:07:48 +0000 UTC (1 container statuses recorded) May 16 22:07:56.996: INFO: Container webserver ready: true, restart count 0 May 16 22:07:56.996: INFO: netserver-0 from pod-network-test-9826 started at 2020-05-16 22:07:24 +0000 UTC (1 container statuses recorded) May 16 22:07:56.996: INFO: Container webserver ready: true, restart count 0 May 16 22:07:56.996: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:07:56.996: INFO: Container kube-proxy ready: true, restart count 0 May 16 22:07:56.996: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:07:56.996: INFO: Container kindnet-cni ready: true, restart count 0 May 16 22:07:56.996: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 16 22:07:57.026: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:07:57.026: INFO: Container kindnet-cni ready: true, restart count 0 May 16 22:07:57.026: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 16 22:07:57.026: INFO: Container kube-bench ready: false, restart count 0 May 16 22:07:57.026: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:07:57.026: INFO: Container kube-proxy ready: true, restart count 0 May 16 22:07:57.026: INFO: host-test-container-pod from pod-network-test-9826 started at 2020-05-16 22:07:48 +0000 UTC (1 container statuses recorded) May 16 22:07:57.026: INFO: Container agnhost ready: true, restart count
0 May 16 22:07:57.026: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 16 22:07:57.026: INFO: Container kube-hunter ready: false, restart count 0 May 16 22:07:57.026: INFO: netserver-1 from pod-network-test-9826 started at 2020-05-16 22:07:24 +0000 UTC (1 container statuses recorded) May 16 22:07:57.026: INFO: Container webserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160fa19a2d61ce17], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:07:58.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6391" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":198,"skipped":3097,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:07:58.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-6dd9f213-2b2d-443c-ab2d-303a9cabffc7 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6dd9f213-2b2d-443c-ab2d-303a9cabffc7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:15.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8291" for this suite. 
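The 77-second runtime above is mostly the kubelet's periodic volume sync: configMap volume contents converge to the updated object eventually rather than instantly. A minimal reproduction (a sketch; names are illustrative, and --dry-run=client assumes a current kubectl):

  kubectl create configmap cm-update-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-update-demo   # illustrative name
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: cm-update-demo
  EOF
  # Update the ConfigMap in place...
  kubectl create configmap cm-update-demo --from-literal=data-1=value-2 \
    --dry-run=client -o yaml | kubectl replace -f -
  # ...and within a sync period the mounted file catches up:
  kubectl exec cm-update-demo -- cat /etc/cm/data-1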
• [SLOW TEST:77.245 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:15.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-e1e3d812-f5fd-4061-a9f4-eac6ebdc6aca [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:15.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2814" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":200,"skipped":3144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:15.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 16 22:09:15.458: INFO: Waiting up to 5m0s for pod "downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093" in namespace "downward-api-2339" to be "success or failure" May 16 22:09:15.518: INFO: Pod "downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093": Phase="Pending", Reason="", readiness=false. Elapsed: 59.889541ms May 16 22:09:17.527: INFO: Pod "downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069227514s May 16 22:09:19.531: INFO: Pod "downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073149101s STEP: Saw pod success May 16 22:09:19.531: INFO: Pod "downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093" satisfied condition "success or failure" May 16 22:09:19.534: INFO: Trying to get logs from node jerma-worker2 pod downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093 container dapi-container: STEP: delete the pod May 16 22:09:19.554: INFO: Waiting for pod downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093 to disappear May 16 22:09:19.558: INFO: Pod downward-api-7dab2b07-796c-49bb-b5a6-a72fac4e1093 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:19.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2339" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3203,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:19.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 16 22:09:19.668: INFO: Waiting up to 5m0s for pod "var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a" in namespace "var-expansion-540" to be "success or failure" May 16 22:09:19.715: INFO: Pod "var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.008889ms May 16 22:09:21.719: INFO: Pod "var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050682996s May 16 22:09:23.722: INFO: Pod "var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054670397s STEP: Saw pod success May 16 22:09:23.723: INFO: Pod "var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a" satisfied condition "success or failure" May 16 22:09:23.726: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a container dapi-container: STEP: delete the pod May 16 22:09:23.745: INFO: Waiting for pod var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a to disappear May 16 22:09:23.749: INFO: Pod var-expansion-9bdfdd0d-94af-43c1-9b21-2c21a57ed38a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:23.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-540" for this suite. 
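Both env-var tests above exercise ordinary container env handling: a fieldRef can surface status.hostIP, and $(VAR) references compose variables defined earlier in the same env list. One pod demonstrating both (a sketch; names and values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c", "env"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: FOO
        value: foo-value
      # $(VAR) only resolves variables defined earlier in this list:
      - name: FOOBAR
        value: "$(FOO);;$(HOST_IP)"
  EOF
  kubectl logs env-demo   # once Succeeded, shows the resolved values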
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:23.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:09:24.339: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:09:26.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263764, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263764, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263764, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725263764, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:09:29.650: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:29.794: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "webhook-4878" for this suite. STEP: Destroying namespace "webhook-4878-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":203,"skipped":3238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:29.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a30373aa-8636-4a47-9039-b0e6e6d5ce88 STEP: Creating a pod to test consume configMaps May 16 22:09:29.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74" in namespace "configmap-3553" to be "success or failure" May 16 22:09:29.986: INFO: Pod "pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.079152ms May 16 22:09:31.990: INFO: Pod "pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007551301s May 16 22:09:33.995: INFO: Pod "pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012089656s STEP: Saw pod success May 16 22:09:33.995: INFO: Pod "pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74" satisfied condition "success or failure" May 16 22:09:33.998: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74 container configmap-volume-test: STEP: delete the pod May 16 22:09:34.023: INFO: Waiting for pod pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74 to disappear May 16 22:09:34.027: INFO: Pod pod-configmaps-a957c033-aab9-41ea-b6df-93863b9dbe74 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:34.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3553" for this suite. 
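Consuming one ConfigMap from multiple volumes in the same pod, as above, is nothing more than two volume entries pointing at the same object (a sketch; names are illustrative):

  kubectl create configmap cm-multi-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-multi-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
      volumeMounts:
      - { name: cm-a, mountPath: /etc/cm-a }
      - { name: cm-b, mountPath: /etc/cm-b }
    volumes:
    - name: cm-a
      configMap: { name: cm-multi-demo }
    - name: cm-b
      configMap: { name: cm-multi-demo }
  EOF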
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3301,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:34.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 22:09:38.326: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:38.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6971" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3304,"failed":0} ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:38.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 16 22:09:43.068: INFO: Successfully updated pod "pod-update-activedeadlineseconds-de36a2b3-d4a4-4255-94e6-c34174bf5f51" May 16 22:09:43.068: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-de36a2b3-d4a4-4255-94e6-c34174bf5f51" in namespace "pods-1217" to be "terminated due to deadline exceeded" May 16 22:09:43.099: INFO: Pod "pod-update-activedeadlineseconds-de36a2b3-d4a4-4255-94e6-c34174bf5f51": Phase="Running", Reason="", readiness=true. 
Elapsed: 30.709397ms May 16 22:09:45.103: INFO: Pod "pod-update-activedeadlineseconds-de36a2b3-d4a4-4255-94e6-c34174bf5f51": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.034698795s May 16 22:09:45.103: INFO: Pod "pod-update-activedeadlineseconds-de36a2b3-d4a4-4255-94e6-c34174bf5f51" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:45.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1217" for this suite. • [SLOW TEST:6.697 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3304,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:45.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 16 22:09:45.225: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:09:45.228: INFO: Number of nodes with available pods: 0 May 16 22:09:45.228: INFO: Node jerma-worker is not yet running exactly one daemon pod May 16 22:09:46.232: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:09:46.235: INFO: Number of nodes with available pods: 0 May 16 22:09:46.235: INFO: Node jerma-worker is not yet running exactly one daemon pod May 16 22:09:47.232: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:09:47.234: INFO: Number of nodes with available pods: 0 May 16 22:09:47.234: INFO: Node jerma-worker is not yet running exactly one daemon pod May 16 22:09:48.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:09:48.237: INFO: Number of nodes with available pods: 1 May 16 22:09:48.237: INFO: Node jerma-worker2 is not yet running exactly one daemon pod May 16 22:09:49.232: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:09:49.235: INFO: Number of nodes with available pods: 2 May 16 22:09:49.235: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 16 22:09:49.256: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:09:49.285: INFO: Number of nodes with available pods: 2 May 16 22:09:49.285: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7935, will wait for the garbage collector to delete the pods May 16 22:09:50.409: INFO: Deleting DaemonSet.extensions daemon-set took: 10.033323ms May 16 22:09:50.710: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.273977ms May 16 22:09:54.213: INFO: Number of nodes with available pods: 0 May 16 22:09:54.213: INFO: Number of running nodes: 0, number of available pods: 0 May 16 22:09:54.215: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7935/daemonsets","resourceVersion":"16749734"},"items":null} May 16 22:09:54.218: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7935/pods","resourceVersion":"16749734"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:54.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7935" for this suite.
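The revive step above can be approximated by deleting a daemon pod instead of failing it: the DaemonSet controller restores one pod per eligible node either way (a sketch; the label and field selector are illustrative, not taken from the log):

  kubectl get pods -n daemonsets-7935 -l daemonset-name=daemon-set -o wide
  # Remove the daemon pod on one node; the controller recreates it there:
  kubectl delete pod -n daemonsets-7935 -l daemonset-name=daemon-set \
    --field-selector spec.nodeName=jerma-worker
  kubectl get pods -n daemonsets-7935 -l daemonset-name=daemon-set --watch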
• [SLOW TEST:9.123 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":207,"skipped":3314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:54.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0867c144-6898-4cb9-8882-a46b30608198 STEP: Creating a pod to test consume configMaps May 16 22:09:54.366: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912" in namespace "projected-9569" to be "success or failure" May 16 22:09:54.376: INFO: Pod "pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912": Phase="Pending", Reason="", readiness=false. Elapsed: 9.637751ms May 16 22:09:56.380: INFO: Pod "pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01399288s May 16 22:09:58.385: INFO: Pod "pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018981204s STEP: Saw pod success May 16 22:09:58.385: INFO: Pod "pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912" satisfied condition "success or failure" May 16 22:09:58.388: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912 container projected-configmap-volume-test: STEP: delete the pod May 16 22:09:58.528: INFO: Waiting for pod pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912 to disappear May 16 22:09:58.680: INFO: Pod pod-projected-configmaps-37933493-8f3d-48cd-9bfc-da34fcc1e912 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:09:58.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9569" for this suite. 
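"As non-root" above means the container runs under a non-zero UID yet must still read the projected file, which comes down to file modes plus the pod security context (a sketch; UIDs, mode, and names are illustrative):

  kubectl create configmap cm-nonroot-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-nonroot-demo   # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000   # non-root UID for every container
      fsGroup: 1000
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/cm/data-1"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      projected:
        defaultMode: 0444   # readable by the non-root user
        sources:
        - configMap:
            name: cm-nonroot-demo
  EOF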
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3338,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:09:58.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-0b2db994-f14d-4ca9-8dfd-b7cfdc7a2df9 STEP: Creating a pod to test consume secrets May 16 22:09:58.759: INFO: Waiting up to 5m0s for pod "pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33" in namespace "secrets-2471" to be "success or failure" May 16 22:09:58.775: INFO: Pod "pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33": Phase="Pending", Reason="", readiness=false. Elapsed: 15.636674ms May 16 22:10:00.780: INFO: Pod "pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020618883s May 16 22:10:02.784: INFO: Pod "pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024835555s STEP: Saw pod success May 16 22:10:02.784: INFO: Pod "pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33" satisfied condition "success or failure" May 16 22:10:02.788: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33 container secret-volume-test: STEP: delete the pod May 16 22:10:02.822: INFO: Waiting for pod pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33 to disappear May 16 22:10:02.853: INFO: Pod pod-secrets-7e1797b6-d06c-4a21-b30b-af7199237c33 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:10:02.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2471" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3345,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:10:02.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:10:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-699" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":210,"skipped":3346,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:10:02.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:10:07.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1090" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:10:07.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-z9xh STEP: Creating a pod to test atomic-volume-subpath May 16 22:10:07.240: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-z9xh" in namespace "subpath-9157" to be "success or failure" May 16 22:10:07.257: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.948118ms May 16 22:10:09.275: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034344314s May 16 22:10:11.279: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 4.038138772s May 16 22:10:13.327: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 6.08642501s May 16 22:10:15.331: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 8.090735483s May 16 22:10:17.340: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 10.099569343s May 16 22:10:19.343: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 12.102981571s May 16 22:10:21.348: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 14.107303096s May 16 22:10:23.352: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 16.111347274s May 16 22:10:25.356: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 18.115768465s May 16 22:10:27.361: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 20.120516555s May 16 22:10:29.366: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Running", Reason="", readiness=true. Elapsed: 22.125767503s May 16 22:10:31.371: INFO: Pod "pod-subpath-test-secret-z9xh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.130297485s STEP: Saw pod success May 16 22:10:31.371: INFO: Pod "pod-subpath-test-secret-z9xh" satisfied condition "success or failure" May 16 22:10:31.374: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-z9xh container test-container-subpath-secret-z9xh: STEP: delete the pod May 16 22:10:31.581: INFO: Waiting for pod pod-subpath-test-secret-z9xh to disappear May 16 22:10:31.608: INFO: Pod pod-subpath-test-secret-z9xh no longer exists STEP: Deleting pod pod-subpath-test-secret-z9xh May 16 22:10:31.608: INFO: Deleting pod "pod-subpath-test-secret-z9xh" in namespace "subpath-9157" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:10:31.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9157" for this suite. • [SLOW TEST:24.573 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":212,"skipped":3370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:10:31.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 16 22:10:31.808: INFO: Waiting up to 5m0s for pod "pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536" in namespace "emptydir-6629" to be "success or failure" May 16 22:10:31.866: INFO: Pod "pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536": Phase="Pending", Reason="", readiness=false. Elapsed: 57.815126ms May 16 22:10:33.871: INFO: Pod "pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062621059s May 16 22:10:35.876: INFO: Pod "pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067372329s STEP: Saw pod success May 16 22:10:35.876: INFO: Pod "pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536" satisfied condition "success or failure" May 16 22:10:35.879: INFO: Trying to get logs from node jerma-worker pod pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536 container test-container: STEP: delete the pod May 16 22:10:35.944: INFO: Waiting for pod pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536 to disappear May 16 22:10:35.947: INFO: Pod pod-9352c59e-71b5-4d78-a5e8-9d3d75eb6536 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:10:35.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6629" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:10:35.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-l8h5 STEP: Creating a pod to test atomic-volume-subpath May 16 22:10:36.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-l8h5" in namespace "subpath-1878" to be "success or failure" May 16 22:10:36.040: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.686778ms May 16 22:10:38.099: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063871206s May 16 22:10:40.104: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 4.068826811s May 16 22:10:42.109: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 6.074109439s May 16 22:10:44.124: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 8.088969867s May 16 22:10:46.129: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 10.093591408s May 16 22:10:48.132: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 12.097199903s May 16 22:10:50.160: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 14.124873503s May 16 22:10:52.164: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 16.128859957s May 16 22:10:54.169: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.133658844s May 16 22:10:56.173: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 20.138411412s May 16 22:10:58.178: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Running", Reason="", readiness=true. Elapsed: 22.14271069s May 16 22:11:00.181: INFO: Pod "pod-subpath-test-projected-l8h5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.146312891s STEP: Saw pod success May 16 22:11:00.181: INFO: Pod "pod-subpath-test-projected-l8h5" satisfied condition "success or failure" May 16 22:11:00.184: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-l8h5 container test-container-subpath-projected-l8h5: STEP: delete the pod May 16 22:11:00.239: INFO: Waiting for pod pod-subpath-test-projected-l8h5 to disappear May 16 22:11:00.257: INFO: Pod pod-subpath-test-projected-l8h5 no longer exists STEP: Deleting pod pod-subpath-test-projected-l8h5 May 16 22:11:00.257: INFO: Deleting pod "pod-subpath-test-projected-l8h5" in namespace "subpath-1878" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:00.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1878" for this suite. • [SLOW TEST:24.337 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":214,"skipped":3427,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:00.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c9c68729-1317-4285-bc43-bdf2bf0ab28c STEP: Creating a pod to test consume configMaps May 16 22:11:00.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0" in namespace "configmap-4613" to be "success or failure" May 16 22:11:00.387: INFO: Pod "pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.91901ms May 16 22:11:02.429: INFO: Pod "pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055402202s May 16 22:11:04.433: INFO: Pod "pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059108467s STEP: Saw pod success May 16 22:11:04.433: INFO: Pod "pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0" satisfied condition "success or failure" May 16 22:11:04.435: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0 container configmap-volume-test: STEP: delete the pod May 16 22:11:04.479: INFO: Waiting for pod pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0 to disappear May 16 22:11:04.501: INFO: Pod pod-configmaps-e8032166-cc98-46b0-99e6-df20310f65b0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4613" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3435,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:04.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 16 22:11:04.619: INFO: Waiting up to 5m0s for pod "client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee" in namespace "containers-7730" to be "success or failure" May 16 22:11:04.634: INFO: Pod "client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee": Phase="Pending", Reason="", readiness=false. Elapsed: 14.972691ms May 16 22:11:06.657: INFO: Pod "client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03849271s May 16 22:11:08.660: INFO: Pod "client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04171566s STEP: Saw pod success May 16 22:11:08.660: INFO: Pod "client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee" satisfied condition "success or failure" May 16 22:11:08.663: INFO: Trying to get logs from node jerma-worker2 pod client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee container test-container: STEP: delete the pod May 16 22:11:08.732: INFO: Waiting for pod client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee to disappear May 16 22:11:08.747: INFO: Pod client-containers-c10cbe60-eb50-4a10-8b25-c10c0ae564ee no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:08.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7730" for this suite. 
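The override test above exercises the other side of the same mapping: a container's Command replaces the image's ENTRYPOINT and its Args replace the image's CMD, and the test sets both ("override all"). A sketch, with illustrative names and image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-all-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},              // replaces ENTRYPOINT
				Args:    []string{"override", "arguments"},  // replaces CMD
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("containers-demo").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Setting only Args while leaving Command empty keeps the image's ENTRYPOINT but swaps its arguments, which is the intermediate case the related conformance tests cover.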
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3436,"failed":0} ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:08.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 16 22:11:12.886: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 16 22:11:23.027: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:23.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6695" for this suite. 
• [SLOW TEST:14.325 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":217,"skipped":3436,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:23.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6a89e208-a0d5-472b-8600-ab312651cda0 STEP: Creating a pod to test consume configMaps May 16 22:11:23.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59" in namespace "configmap-8549" to be "success or failure" May 16 22:11:23.257: INFO: Pod "pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59": Phase="Pending", Reason="", readiness=false. Elapsed: 21.305662ms May 16 22:11:25.262: INFO: Pod "pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025383576s May 16 22:11:27.266: INFO: Pod "pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59": Phase="Running", Reason="", readiness=true. Elapsed: 4.03029242s May 16 22:11:29.270: INFO: Pod "pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033947782s STEP: Saw pod success May 16 22:11:29.270: INFO: Pod "pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59" satisfied condition "success or failure" May 16 22:11:29.273: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59 container configmap-volume-test: STEP: delete the pod May 16 22:11:29.289: INFO: Waiting for pod pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59 to disappear May 16 22:11:29.309: INFO: Pod pod-configmaps-8feafbbd-f5b8-41c4-8987-d7f1b8115b59 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:29.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8549" for this suite. 
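Nearly every test in this stretch waits on the same "success or failure" condition the ConfigMap test above just satisfied: poll the pod until its phase reaches Succeeded, and treat Failed as a hard error. A sketch of that polling pattern outside the framework, using apimachinery's wait helper with the log's 5m0s budget (names are hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "configmap-demo", "pod-configmaps-demo"

	// Mirrors "Waiting up to 5m0s for pod ... to be 'success or failure'".
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch p.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // "Saw pod success"
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending or Running; keep polling
	})
	if err != nil {
		panic(err)
	}
}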
• [SLOW TEST:6.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3437,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:29.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:33.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6059" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3456,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:33.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 16 22:11:41.411: INFO: 9 pods remaining May 16 22:11:41.411: INFO: 0 pods has nil DeletionTimestamp May 16 22:11:41.411: INFO: STEP: Gathering metrics W0516 22:11:42.682011 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 16 22:11:42.682: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:42.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4116" for this suite. • [SLOW TEST:9.258 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":220,"skipped":3457,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:42.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:11:43.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2292" for this suite. 
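The Kubelet test that just ran schedules a container whose command exits non-zero on every start, then verifies the pod can still be deleted while it crash-loops. A sketch, with /bin/false standing in for the suite's always-failing busybox command and all names illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "kubelet-test-demo"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			// RestartPolicy defaults to Always, so the container keeps
			// failing and restarting; deletion must still succeed.
			Containers: []corev1.Container{{
				Name:    "failer",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete works regardless of the container's crash-loop state.
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}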
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3472,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:11:43.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 16 22:11:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 16 22:11:54.733: INFO: >>> kubeConfig: /root/.kube/config May 16 22:11:57.678: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:12:08.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3314" for this suite. 
• [SLOW TEST:24.478 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":222,"skipped":3493,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:12:08.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:12:13.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5981" for this suite. 
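The ReplicationController adoption test above hinges on label matching: a bare pod carrying the right label already exists, and an RC created afterwards with a selector matching that label adopts the orphan instead of creating a fresh replica. A sketch of the two objects involved (names, labels, and image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "rc-demo"
	labels := map[string]string{"name": "pod-adoption"}
	tmpl := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "app", Image: "httpd:2.4.38-alpine"}},
	}

	// 1. An orphan pod with the label the controller will select on.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec:       tmpl,
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), orphan, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. An RC whose selector matches; with replicas=1 it adopts the
	// existing pod rather than spawning a second one.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       tmpl,
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Adoption shows up as an ownerReference added to the orphan pod, which is exactly the "Then the orphan pod is adopted" step above.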
• [SLOW TEST:5.126 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":223,"skipped":3512,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:12:13.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 16 22:12:13.572: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750682 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 16 22:12:13.572: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750682 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 16 22:12:23.636: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750731 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 16 22:12:23.636: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750731 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 16 22:12:33.646: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750762 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 16 22:12:33.646: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750762 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 16 22:12:43.654: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750794 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 16 22:12:43.654: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-a 82a551fc-66c8-42d2-8a23-423b3eadd64e 16750794 0 2020-05-16 22:12:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 16 22:12:53.664: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-b aa402147-2f8c-4a72-9d0c-2f5efdf95c97 16750824 0 2020-05-16 22:12:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 16 22:12:53.664: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-b aa402147-2f8c-4a72-9d0c-2f5efdf95c97 16750824 0 2020-05-16 22:12:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 16 22:13:03.670: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-b aa402147-2f8c-4a72-9d0c-2f5efdf95c97 16750854 0 2020-05-16 22:12:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 16 22:13:03.670: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2238 /api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-configmap-b aa402147-2f8c-4a72-9d0c-2f5efdf95c97 16750854 0 2020-05-16 22:12:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:13.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2238" for this suite. 
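The watch test above registers three watchers with different label selectors (label A, label B, and A-or-B), which is why every ADDED/MODIFIED/DELETED event is logged twice: two of the three watchers match each configmap. A minimal sketch of one such label-selector watch (selector string mirrors the test's convention; names are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch only configmaps carrying the "A" label, like the test's
	// first watcher.
	w, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event corresponds to one "Got : ADDED/MODIFIED/DELETED" line
	// in the log above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}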
• [SLOW TEST:60.394 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":224,"skipped":3517,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:13.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:13:13.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9" in namespace "projected-8073" to be "success or failure" May 16 22:13:13.762: INFO: Pod "downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.671136ms May 16 22:13:15.766: INFO: Pod "downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016131354s May 16 22:13:17.771: INFO: Pod "downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02036525s STEP: Saw pod success May 16 22:13:17.771: INFO: Pod "downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9" satisfied condition "success or failure" May 16 22:13:17.774: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9 container client-container: STEP: delete the pod May 16 22:13:17.836: INFO: Waiting for pod downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9 to disappear May 16 22:13:17.946: INFO: Pod downwardapi-volume-4ba66512-b2bf-4742-bde9-6d15948338d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:17.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8073" for this suite. 
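The projected downwardAPI test above exposes the pod's own name to the container as a file, via a downward-API source inside a projected volume. A sketch of that volume (paths and names are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("projected-demo").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}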
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3524,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:17.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:13:18.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e" in namespace "downward-api-3983" to be "success or failure" May 16 22:13:18.078: INFO: Pod "downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.6238ms May 16 22:13:20.083: INFO: Pod "downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044449466s May 16 22:13:22.088: INFO: Pod "downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049361229s STEP: Saw pod success May 16 22:13:22.088: INFO: Pod "downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e" satisfied condition "success or failure" May 16 22:13:22.092: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e container client-container: STEP: delete the pod May 16 22:13:22.173: INFO: Waiting for pod downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e to disappear May 16 22:13:22.188: INFO: Pod downwardapi-volume-c6f96565-ca04-4993-be21-795463ad921e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:22.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3983" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3525,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:22.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-48730032-9ad4-423c-82b5-c7881315f319 STEP: Creating a pod to test consume secrets May 16 22:13:22.378: INFO: Waiting up to 5m0s for pod "pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223" in namespace "secrets-3598" to be "success or failure" May 16 22:13:22.416: INFO: Pod "pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223": Phase="Pending", Reason="", readiness=false. Elapsed: 38.130556ms May 16 22:13:24.420: INFO: Pod "pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041975409s May 16 22:13:26.424: INFO: Pod "pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046263434s STEP: Saw pod success May 16 22:13:26.424: INFO: Pod "pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223" satisfied condition "success or failure" May 16 22:13:26.428: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223 container secret-volume-test: STEP: delete the pod May 16 22:13:26.487: INFO: Waiting for pod pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223 to disappear May 16 22:13:26.557: INFO: Pod pod-secrets-ad545d89-5d86-41f8-84ac-7d35d1936223 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:26.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3598" for this suite. STEP: Destroying namespace "secret-namespace-3804" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3542,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:26.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 16 22:13:26.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6427' May 16 22:13:30.237: INFO: stderr: "" May 16 22:13:30.237: INFO: stdout: "pod/pause created\n" May 16 22:13:30.237: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 16 22:13:30.237: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6427" to be "running and ready" May 16 22:13:30.261: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.843281ms May 16 22:13:32.444: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206684044s May 16 22:13:34.448: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.210606953s May 16 22:13:34.448: INFO: Pod "pause" satisfied condition "running and ready" May 16 22:13:34.448: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 16 22:13:34.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6427' May 16 22:13:34.557: INFO: stderr: "" May 16 22:13:34.557: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 16 22:13:34.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6427' May 16 22:13:34.644: INFO: stderr: "" May 16 22:13:34.644: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 16 22:13:34.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6427' May 16 22:13:34.745: INFO: stderr: "" May 16 22:13:34.745: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 16 22:13:34.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6427' May 16 22:13:34.847: INFO: stderr: "" May 16 22:13:34.847: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 16 22:13:34.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6427' May 16 22:13:34.978: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 22:13:34.978: INFO: stdout: "pod \"pause\" force deleted\n" May 16 22:13:34.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6427' May 16 22:13:35.157: INFO: stderr: "No resources found in kubectl-6427 namespace.\n" May 16 22:13:35.157: INFO: stdout: "" May 16 22:13:35.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6427 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 22:13:35.246: INFO: stderr: "" May 16 22:13:35.246: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:35.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6427" for this suite. 
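The kubectl label test above adds a label with "kubectl label pods pause testing-label=..." and removes it with the trailing-dash form "testing-label-". Against the API, both operations are strategic-merge patches on metadata.labels, where a null value deletes the key. A sketch of the equivalent client-go calls (pod name and namespace are the test's; the rest is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "kubectl-demo", "pause"

	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), name,
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Equivalent of: kubectl label pods pause testing-label-
	// (a null value removes the key from metadata.labels)
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), name,
		types.StrategicMergePatchType, remove, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}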
• [SLOW TEST:8.674 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":228,"skipped":3551,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:35.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8829 STEP: creating replication controller nodeport-test in namespace services-8829 I0516 22:13:35.571963 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8829, replica count: 2 I0516 22:13:38.622580 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 22:13:41.622807 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 22:13:41.622: INFO: Creating new exec pod May 16 22:13:46.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8829 execpodmq2w5 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 16 22:13:46.904: INFO: stderr: "I0516 22:13:46.799719 2719 log.go:172] (0xc0000f4b00) (0xc0008f2000) Create stream\nI0516 22:13:46.799773 2719 log.go:172] (0xc0000f4b00) (0xc0008f2000) Stream added, broadcasting: 1\nI0516 22:13:46.802613 2719 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0516 22:13:46.802668 2719 log.go:172] (0xc0000f4b00) (0xc000a2c000) Create stream\nI0516 22:13:46.802706 2719 log.go:172] (0xc0000f4b00) (0xc000a2c000) Stream added, broadcasting: 3\nI0516 22:13:46.803708 2719 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0516 22:13:46.803745 2719 log.go:172] (0xc0000f4b00) (0xc0008f20a0) Create stream\nI0516 22:13:46.803757 2719 log.go:172] (0xc0000f4b00) (0xc0008f20a0) Stream added, broadcasting: 5\nI0516 22:13:46.804675 2719 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0516 22:13:46.895810 2719 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0516 22:13:46.895849 2719 log.go:172] (0xc000a2c000) (3) Data frame handling\nI0516 22:13:46.895878 2719 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0516 22:13:46.895918 2719 log.go:172] (0xc0008f20a0) (5) Data frame handling\nI0516 22:13:46.895968 2719 
log.go:172] (0xc0008f20a0) (5) Data frame sent\nI0516 22:13:46.895987 2719 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0516 22:13:46.895997 2719 log.go:172] (0xc0008f20a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0516 22:13:46.897839 2719 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0516 22:13:46.897864 2719 log.go:172] (0xc0008f2000) (1) Data frame handling\nI0516 22:13:46.897880 2719 log.go:172] (0xc0008f2000) (1) Data frame sent\nI0516 22:13:46.897890 2719 log.go:172] (0xc0000f4b00) (0xc0008f2000) Stream removed, broadcasting: 1\nI0516 22:13:46.898069 2719 log.go:172] (0xc0000f4b00) Go away received\nI0516 22:13:46.898229 2719 log.go:172] (0xc0000f4b00) (0xc0008f2000) Stream removed, broadcasting: 1\nI0516 22:13:46.898252 2719 log.go:172] (0xc0000f4b00) (0xc000a2c000) Stream removed, broadcasting: 3\nI0516 22:13:46.898262 2719 log.go:172] (0xc0000f4b00) (0xc0008f20a0) Stream removed, broadcasting: 5\n" May 16 22:13:46.904: INFO: stdout: "" May 16 22:13:46.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8829 execpodmq2w5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.116.31 80' May 16 22:13:47.169: INFO: stderr: "I0516 22:13:47.081968 2741 log.go:172] (0xc000ade580) (0xc000b1c000) Create stream\nI0516 22:13:47.082042 2741 log.go:172] (0xc000ade580) (0xc000b1c000) Stream added, broadcasting: 1\nI0516 22:13:47.084812 2741 log.go:172] (0xc000ade580) Reply frame received for 1\nI0516 22:13:47.084855 2741 log.go:172] (0xc000ade580) (0xc0006f9a40) Create stream\nI0516 22:13:47.084868 2741 log.go:172] (0xc000ade580) (0xc0006f9a40) Stream added, broadcasting: 3\nI0516 22:13:47.086188 2741 log.go:172] (0xc000ade580) Reply frame received for 3\nI0516 22:13:47.086234 2741 log.go:172] (0xc000ade580) (0xc0006f9c20) Create stream\nI0516 22:13:47.086251 2741 log.go:172] (0xc000ade580) (0xc0006f9c20) Stream added, broadcasting: 5\nI0516 22:13:47.087405 2741 log.go:172] (0xc000ade580) Reply frame received for 5\nI0516 22:13:47.163819 2741 log.go:172] (0xc000ade580) Data frame received for 5\nI0516 22:13:47.163856 2741 log.go:172] (0xc0006f9c20) (5) Data frame handling\nI0516 22:13:47.163868 2741 log.go:172] (0xc0006f9c20) (5) Data frame sent\nI0516 22:13:47.163876 2741 log.go:172] (0xc000ade580) Data frame received for 5\nI0516 22:13:47.163884 2741 log.go:172] (0xc0006f9c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.116.31 80\nConnection to 10.96.116.31 80 port [tcp/http] succeeded!\nI0516 22:13:47.163903 2741 log.go:172] (0xc000ade580) Data frame received for 3\nI0516 22:13:47.163910 2741 log.go:172] (0xc0006f9a40) (3) Data frame handling\nI0516 22:13:47.164683 2741 log.go:172] (0xc000ade580) Data frame received for 1\nI0516 22:13:47.164714 2741 log.go:172] (0xc000b1c000) (1) Data frame handling\nI0516 22:13:47.164866 2741 log.go:172] (0xc000b1c000) (1) Data frame sent\nI0516 22:13:47.164888 2741 log.go:172] (0xc000ade580) (0xc000b1c000) Stream removed, broadcasting: 1\nI0516 22:13:47.164905 2741 log.go:172] (0xc000ade580) Go away received\nI0516 22:13:47.165413 2741 log.go:172] (0xc000ade580) (0xc000b1c000) Stream removed, broadcasting: 1\nI0516 22:13:47.165432 2741 log.go:172] (0xc000ade580) (0xc0006f9a40) Stream removed, broadcasting: 3\nI0516 22:13:47.165441 2741 log.go:172] (0xc000ade580) (0xc0006f9c20) Stream removed, broadcasting: 5\n" May 16 22:13:47.170: INFO: stdout: "" May 16 22:13:47.170: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-8829 execpodmq2w5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32400' May 16 22:13:47.380: INFO: stderr: "I0516 22:13:47.304278 2760 log.go:172] (0xc000a94000) (0xc0005646e0) Create stream\nI0516 22:13:47.304338 2760 log.go:172] (0xc000a94000) (0xc0005646e0) Stream added, broadcasting: 1\nI0516 22:13:47.306392 2760 log.go:172] (0xc000a94000) Reply frame received for 1\nI0516 22:13:47.306427 2760 log.go:172] (0xc000a94000) (0xc0007d8dc0) Create stream\nI0516 22:13:47.306436 2760 log.go:172] (0xc000a94000) (0xc0007d8dc0) Stream added, broadcasting: 3\nI0516 22:13:47.307359 2760 log.go:172] (0xc000a94000) Reply frame received for 3\nI0516 22:13:47.307384 2760 log.go:172] (0xc000a94000) (0xc000603ae0) Create stream\nI0516 22:13:47.307392 2760 log.go:172] (0xc000a94000) (0xc000603ae0) Stream added, broadcasting: 5\nI0516 22:13:47.308400 2760 log.go:172] (0xc000a94000) Reply frame received for 5\nI0516 22:13:47.363384 2760 log.go:172] (0xc000a94000) Data frame received for 3\nI0516 22:13:47.363408 2760 log.go:172] (0xc0007d8dc0) (3) Data frame handling\nI0516 22:13:47.363435 2760 log.go:172] (0xc000a94000) Data frame received for 5\nI0516 22:13:47.363453 2760 log.go:172] (0xc000603ae0) (5) Data frame handling\nI0516 22:13:47.363471 2760 log.go:172] (0xc000603ae0) (5) Data frame sent\nI0516 22:13:47.363481 2760 log.go:172] (0xc000a94000) Data frame received for 5\nI0516 22:13:47.363490 2760 log.go:172] (0xc000603ae0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32400\nConnection to 172.17.0.10 32400 port [tcp/32400] succeeded!\nI0516 22:13:47.364890 2760 log.go:172] (0xc000a94000) Data frame received for 1\nI0516 22:13:47.364906 2760 log.go:172] (0xc0005646e0) (1) Data frame handling\nI0516 22:13:47.364918 2760 log.go:172] (0xc0005646e0) (1) Data frame sent\nI0516 22:13:47.364983 2760 log.go:172] (0xc000a94000) (0xc0005646e0) Stream removed, broadcasting: 1\nI0516 22:13:47.365416 2760 log.go:172] (0xc000a94000) Go away received\nI0516 22:13:47.365463 2760 log.go:172] (0xc000a94000) (0xc0005646e0) Stream removed, broadcasting: 1\nI0516 22:13:47.365477 2760 log.go:172] (0xc000a94000) (0xc0007d8dc0) Stream removed, broadcasting: 3\nI0516 22:13:47.365486 2760 log.go:172] (0xc000a94000) (0xc000603ae0) Stream removed, broadcasting: 5\n" May 16 22:13:47.380: INFO: stdout: "" May 16 22:13:47.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8829 execpodmq2w5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32400' May 16 22:13:47.570: INFO: stderr: "I0516 22:13:47.508757 2781 log.go:172] (0xc0000ed340) (0xc00059bc20) Create stream\nI0516 22:13:47.508824 2781 log.go:172] (0xc0000ed340) (0xc00059bc20) Stream added, broadcasting: 1\nI0516 22:13:47.512127 2781 log.go:172] (0xc0000ed340) Reply frame received for 1\nI0516 22:13:47.512158 2781 log.go:172] (0xc0000ed340) (0xc000833f40) Create stream\nI0516 22:13:47.512166 2781 log.go:172] (0xc0000ed340) (0xc000833f40) Stream added, broadcasting: 3\nI0516 22:13:47.512866 2781 log.go:172] (0xc0000ed340) Reply frame received for 3\nI0516 22:13:47.512918 2781 log.go:172] (0xc0000ed340) (0xc000166640) Create stream\nI0516 22:13:47.512933 2781 log.go:172] (0xc0000ed340) (0xc000166640) Stream added, broadcasting: 5\nI0516 22:13:47.513840 2781 log.go:172] (0xc0000ed340) Reply frame received for 5\nI0516 22:13:47.564804 2781 log.go:172] (0xc0000ed340) Data frame received for 3\nI0516 22:13:47.564824 2781 log.go:172] (0xc000833f40) (3) Data frame handling\nI0516 
22:13:47.564883 2781 log.go:172] (0xc0000ed340) Data frame received for 5\nI0516 22:13:47.564918 2781 log.go:172] (0xc000166640) (5) Data frame handling\nI0516 22:13:47.564941 2781 log.go:172] (0xc000166640) (5) Data frame sent\nI0516 22:13:47.564954 2781 log.go:172] (0xc0000ed340) Data frame received for 5\nI0516 22:13:47.564972 2781 log.go:172] (0xc000166640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32400\nConnection to 172.17.0.8 32400 port [tcp/32400] succeeded!\nI0516 22:13:47.566117 2781 log.go:172] (0xc0000ed340) Data frame received for 1\nI0516 22:13:47.566143 2781 log.go:172] (0xc00059bc20) (1) Data frame handling\nI0516 22:13:47.566169 2781 log.go:172] (0xc00059bc20) (1) Data frame sent\nI0516 22:13:47.566191 2781 log.go:172] (0xc0000ed340) (0xc00059bc20) Stream removed, broadcasting: 1\nI0516 22:13:47.566261 2781 log.go:172] (0xc0000ed340) Go away received\nI0516 22:13:47.566601 2781 log.go:172] (0xc0000ed340) (0xc00059bc20) Stream removed, broadcasting: 1\nI0516 22:13:47.566623 2781 log.go:172] (0xc0000ed340) (0xc000833f40) Stream removed, broadcasting: 3\nI0516 22:13:47.566636 2781 log.go:172] (0xc0000ed340) (0xc000166640) Stream removed, broadcasting: 5\n" May 16 22:13:47.570: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:47.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8829" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.324 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":229,"skipped":3554,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:47.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:13:48.208: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:13:50.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264028, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264028, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264028, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264028, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:13:53.283: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 16 22:13:57.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-164 to-be-attached-pod -i -c=container1' May 16 22:13:57.600: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:13:57.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-164" for this suite. STEP: Destroying namespace "webhook-164-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.154 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":230,"skipped":3569,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:13:57.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 16 22:13:57.829: INFO: Waiting up to 5m0s for pod "pod-83c65601-df4e-4fbc-be88-823fb12b534c" in namespace "emptydir-7696" to be "success or failure" May 16 22:13:57.842: INFO: Pod 
"pod-83c65601-df4e-4fbc-be88-823fb12b534c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693099ms May 16 22:13:59.851: INFO: Pod "pod-83c65601-df4e-4fbc-be88-823fb12b534c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021302057s May 16 22:14:01.855: INFO: Pod "pod-83c65601-df4e-4fbc-be88-823fb12b534c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025226844s STEP: Saw pod success May 16 22:14:01.855: INFO: Pod "pod-83c65601-df4e-4fbc-be88-823fb12b534c" satisfied condition "success or failure" May 16 22:14:01.858: INFO: Trying to get logs from node jerma-worker2 pod pod-83c65601-df4e-4fbc-be88-823fb12b534c container test-container: STEP: delete the pod May 16 22:14:01.906: INFO: Waiting for pod pod-83c65601-df4e-4fbc-be88-823fb12b534c to disappear May 16 22:14:01.909: INFO: Pod pod-83c65601-df4e-4fbc-be88-823fb12b534c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:14:01.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7696" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:14:01.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-f0857eea-6491-4e7b-934a-97646e4cc289 STEP: Creating a pod to test consume secrets May 16 22:14:01.994: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0" in namespace "projected-6487" to be "success or failure" May 16 22:14:02.272: INFO: Pod "pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0": Phase="Pending", Reason="", readiness=false. Elapsed: 277.621172ms May 16 22:14:04.282: INFO: Pod "pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287950948s May 16 22:14:06.286: INFO: Pod "pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0": Phase="Running", Reason="", readiness=true. Elapsed: 4.29198835s May 16 22:14:08.295: INFO: Pod "pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.301145563s STEP: Saw pod success May 16 22:14:08.295: INFO: Pod "pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0" satisfied condition "success or failure" May 16 22:14:08.298: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0 container projected-secret-volume-test: STEP: delete the pod May 16 22:14:08.316: INFO: Waiting for pod pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0 to disappear May 16 22:14:08.342: INFO: Pod pod-projected-secrets-eb3922fa-6531-487b-b443-768b6b6183c0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:14:08.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6487" for this suite. • [SLOW TEST:6.434 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3654,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:14:08.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 16 22:14:16.456: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:16.460: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:18.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:18.464: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:20.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:20.463: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:22.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:22.464: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:24.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:24.464: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:26.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:26.465: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:28.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:28.481: INFO: Pod pod-with-poststart-exec-hook still exists May 16 22:14:30.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 22:14:30.464: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:14:30.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9056" for this suite. 
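The pod under test pairs a long-running container with a postStart exec hook. A minimal sketch of such a spec, assuming a busybox image and an illustrative hook command (only the pod name matches the log; the e2e test uses its own test image and manifest):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main                      # container name is an assumption
    image: busybox                  # illustrative; not the test's image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
EOF

The kubelet runs the hook right after the container is created, and the container is not reported Running until the hook returns, which is why the test can check the hook's effect before deleting the pod.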
• [SLOW TEST:22.121 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:14:30.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 16 22:14:31.011: INFO: Waiting up to 5m0s for pod "pod-05446562-cb21-49b9-b765-4099801666b5" in namespace "emptydir-5481" to be "success or failure" May 16 22:14:31.046: INFO: Pod "pod-05446562-cb21-49b9-b765-4099801666b5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.626055ms May 16 22:14:33.050: INFO: Pod "pod-05446562-cb21-49b9-b765-4099801666b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03841222s May 16 22:14:35.720: INFO: Pod "pod-05446562-cb21-49b9-b765-4099801666b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708105357s May 16 22:14:37.771: INFO: Pod "pod-05446562-cb21-49b9-b765-4099801666b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759257924s May 16 22:14:39.776: INFO: Pod "pod-05446562-cb21-49b9-b765-4099801666b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.764012315s STEP: Saw pod success May 16 22:14:39.776: INFO: Pod "pod-05446562-cb21-49b9-b765-4099801666b5" satisfied condition "success or failure" May 16 22:14:39.779: INFO: Trying to get logs from node jerma-worker2 pod pod-05446562-cb21-49b9-b765-4099801666b5 container test-container: STEP: delete the pod May 16 22:14:39.802: INFO: Waiting for pod pod-05446562-cb21-49b9-b765-4099801666b5 to disappear May 16 22:14:39.812: INFO: Pod pod-05446562-cb21-49b9-b765-4099801666b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:14:39.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5481" for this suite. 
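Behind the emptydir matrix above is a tmpfs-backed emptyDir mounted into a pod running as a non-root UID; a minimal sketch, assuming illustrative pod name, UID, image and probe command (the conformance test uses its own mounttest image):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                 # any non-root UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs, as the (tmpfs) variants require
EOF

The sibling variants in this suite differ mainly in the file mode they exercise (0644, 0666, ...) and in whether the pod runs as root.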
• [SLOW TEST:9.352 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:14:39.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-5884 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5884 to expose endpoints map[] May 16 22:14:40.005: INFO: Get endpoints failed (18.144991ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 16 22:14:41.009: INFO: successfully validated that service multi-endpoint-test in namespace services-5884 exposes endpoints map[] (1.021844794s elapsed) STEP: Creating pod pod1 in namespace services-5884 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5884 to expose endpoints map[pod1:[100]] May 16 22:14:45.202: INFO: successfully validated that service multi-endpoint-test in namespace services-5884 exposes endpoints map[pod1:[100]] (4.186305903s elapsed) STEP: Creating pod pod2 in namespace services-5884 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5884 to expose endpoints map[pod1:[100] pod2:[101]] May 16 22:14:49.429: INFO: successfully validated that service multi-endpoint-test in namespace services-5884 exposes endpoints map[pod1:[100] pod2:[101]] (4.221342108s elapsed) STEP: Deleting pod pod1 in namespace services-5884 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5884 to expose endpoints map[pod2:[101]] May 16 22:14:50.471: INFO: successfully validated that service multi-endpoint-test in namespace services-5884 exposes endpoints map[pod2:[101]] (1.037584889s elapsed) STEP: Deleting pod pod2 in namespace services-5884 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5884 to expose endpoints map[] May 16 22:14:51.502: INFO: successfully validated that service multi-endpoint-test in namespace services-5884 exposes endpoints map[] (1.027579216s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:14:51.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5884" for this suite. 
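The multiport assertions above reduce to a Service with two named ports selecting short-lived pods; a minimal sketch, where the selector label and the service-side ports are assumptions and only the container ports 100 and 101 are taken from the endpoint maps in the log:

$ kubectl apply -n services-5884 -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test        # assumed label
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
$ kubectl get endpoints multi-endpoint-test -n services-5884    # the address map fills and drains as matching pods come and go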
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.719 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":235,"skipped":3738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:14:51.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 16 22:14:51.598: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. May 16 22:14:52.160: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 16 22:14:54.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 22:14:56.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264092, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 22:14:59.105: INFO: Waited 644.98058ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:15:00.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1410" for this suite. • [SLOW TEST:8.838 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":236,"skipped":3776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:15:00.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:15:13.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2377" for this suite. • [SLOW TEST:13.550 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":237,"skipped":3809,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:15:13.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 16 22:15:14.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-630' May 16 22:15:15.040: INFO: stderr: "" May 16 22:15:15.040: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 22:15:15.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-630' May 16 22:15:15.188: INFO: stderr: "" May 16 22:15:15.188: INFO: stdout: "update-demo-nautilus-cgqzp update-demo-nautilus-nrr8k " May 16 22:15:15.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgqzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:15.294: INFO: stderr: "" May 16 22:15:15.294: INFO: stdout: "" May 16 22:15:15.294: INFO: update-demo-nautilus-cgqzp is created but not running May 16 22:15:20.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-630' May 16 22:15:20.401: INFO: stderr: "" May 16 22:15:20.401: INFO: stdout: "update-demo-nautilus-cgqzp update-demo-nautilus-nrr8k " May 16 22:15:20.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgqzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:20.504: INFO: stderr: "" May 16 22:15:20.504: INFO: stdout: "true" May 16 22:15:20.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgqzp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:20.605: INFO: stderr: "" May 16 22:15:20.605: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 22:15:20.605: INFO: validating pod update-demo-nautilus-cgqzp May 16 22:15:20.609: INFO: got data: { "image": "nautilus.jpg" } May 16 22:15:20.609: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 22:15:20.609: INFO: update-demo-nautilus-cgqzp is verified up and running May 16 22:15:20.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrr8k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:20.700: INFO: stderr: "" May 16 22:15:20.700: INFO: stdout: "true" May 16 22:15:20.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrr8k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:20.807: INFO: stderr: "" May 16 22:15:20.807: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 22:15:20.807: INFO: validating pod update-demo-nautilus-nrr8k May 16 22:15:20.816: INFO: got data: { "image": "nautilus.jpg" } May 16 22:15:20.816: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 22:15:20.816: INFO: update-demo-nautilus-nrr8k is verified up and running STEP: scaling down the replication controller May 16 22:15:20.819: INFO: scanned /root for discovery docs: May 16 22:15:20.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-630' May 16 22:15:21.950: INFO: stderr: "" May 16 22:15:21.950: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 22:15:21.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-630' May 16 22:15:22.047: INFO: stderr: "" May 16 22:15:22.047: INFO: stdout: "update-demo-nautilus-cgqzp update-demo-nautilus-nrr8k " STEP: Replicas for name=update-demo: expected=1 actual=2 May 16 22:15:27.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-630' May 16 22:15:27.178: INFO: stderr: "" May 16 22:15:27.178: INFO: stdout: "update-demo-nautilus-nrr8k " May 16 22:15:27.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrr8k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:27.286: INFO: stderr: "" May 16 22:15:27.286: INFO: stdout: "true" May 16 22:15:27.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrr8k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:27.415: INFO: stderr: "" May 16 22:15:27.415: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 22:15:27.415: INFO: validating pod update-demo-nautilus-nrr8k May 16 22:15:27.419: INFO: got data: { "image": "nautilus.jpg" } May 16 22:15:27.419: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 22:15:27.419: INFO: update-demo-nautilus-nrr8k is verified up and running STEP: scaling up the replication controller May 16 22:15:27.422: INFO: scanned /root for discovery docs: May 16 22:15:27.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-630' May 16 22:15:28.551: INFO: stderr: "" May 16 22:15:28.552: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 22:15:28.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-630' May 16 22:15:28.651: INFO: stderr: "" May 16 22:15:28.651: INFO: stdout: "update-demo-nautilus-nmvjk update-demo-nautilus-nrr8k " May 16 22:15:28.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmvjk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:28.746: INFO: stderr: "" May 16 22:15:28.746: INFO: stdout: "" May 16 22:15:28.746: INFO: update-demo-nautilus-nmvjk is created but not running May 16 22:15:33.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-630' May 16 22:15:33.858: INFO: stderr: "" May 16 22:15:33.858: INFO: stdout: "update-demo-nautilus-nmvjk update-demo-nautilus-nrr8k " May 16 22:15:33.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmvjk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:33.956: INFO: stderr: "" May 16 22:15:33.956: INFO: stdout: "true" May 16 22:15:33.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmvjk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:34.051: INFO: stderr: "" May 16 22:15:34.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 22:15:34.051: INFO: validating pod update-demo-nautilus-nmvjk May 16 22:15:34.055: INFO: got data: { "image": "nautilus.jpg" } May 16 22:15:34.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 22:15:34.055: INFO: update-demo-nautilus-nmvjk is verified up and running May 16 22:15:34.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrr8k -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:34.152: INFO: stderr: "" May 16 22:15:34.152: INFO: stdout: "true" May 16 22:15:34.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrr8k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-630' May 16 22:15:34.239: INFO: stderr: "" May 16 22:15:34.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 22:15:34.239: INFO: validating pod update-demo-nautilus-nrr8k May 16 22:15:34.242: INFO: got data: { "image": "nautilus.jpg" } May 16 22:15:34.242: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 22:15:34.242: INFO: update-demo-nautilus-nrr8k is verified up and running STEP: using delete to clean up resources May 16 22:15:34.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-630' May 16 22:15:34.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 22:15:34.340: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 22:15:34.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-630' May 16 22:15:34.440: INFO: stderr: "No resources found in kubectl-630 namespace.\n" May 16 22:15:34.440: INFO: stdout: "" May 16 22:15:34.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-630 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 22:15:34.536: INFO: stderr: "" May 16 22:15:34.536: INFO: stdout: "update-demo-nautilus-nmvjk\nupdate-demo-nautilus-nrr8k\n" May 16 22:15:35.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-630' May 16 22:15:35.133: INFO: stderr: "No resources found in kubectl-630 namespace.\n" May 16 22:15:35.133: INFO: stdout: "" May 16 22:15:35.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-630 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 22:15:35.230: INFO: stderr: "" May 16 22:15:35.230: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:15:35.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-630" for this suite. 
• [SLOW TEST:21.305 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":238,"skipped":3826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:15:35.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 22:15:35.287: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 16 22:15:35.325: INFO: Pod name sample-pod: Found 0 pods out of 1 May 16 22:15:40.344: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 22:15:40.344: INFO: Creating deployment "test-rolling-update-deployment" May 16 22:15:40.348: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 16 22:15:40.365: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 16 22:15:42.380: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 16 22:15:42.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264140, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264140, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264140, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264140, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 22:15:44.402: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 16 22:15:44.438: INFO: Deployment 
"test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2167 /apis/apps/v1/namespaces/deployment-2167/deployments/test-rolling-update-deployment 2f6020ca-ab18-4da0-a030-54ac0f45ac5b 16751983 1 2020-05-16 22:15:40 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d97cd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-16 22:15:40 +0000 UTC,LastTransitionTime:2020-05-16 22:15:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-16 22:15:43 +0000 UTC,LastTransitionTime:2020-05-16 22:15:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 16 22:15:44.441: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2167 /apis/apps/v1/namespaces/deployment-2167/replicasets/test-rolling-update-deployment-67cf4f6444 d4194939-7d32-459c-9229-eccc6cda8bc7 16751972 1 2020-05-16 22:15:40 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2f6020ca-ab18-4da0-a030-54ac0f45ac5b 0xc00351cb97 0xc00351cb98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00351cc08 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 16 22:15:44.441: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 16 22:15:44.441: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2167 /apis/apps/v1/namespaces/deployment-2167/replicasets/test-rolling-update-controller b17fb564-1f5c-447d-aa59-c30879d8a91d 16751981 2 2020-05-16 22:15:35 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2f6020ca-ab18-4da0-a030-54ac0f45ac5b 0xc00351cac7 0xc00351cac8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00351cb28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 22:15:44.445: INFO: Pod "test-rolling-update-deployment-67cf4f6444-bdtmk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-bdtmk test-rolling-update-deployment-67cf4f6444- deployment-2167 /api/v1/namespaces/deployment-2167/pods/test-rolling-update-deployment-67cf4f6444-bdtmk c8040960-81ab-4364-ad31-17653ab3e3de 16751971 0 2020-05-16 22:15:40 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 d4194939-7d32-459c-9229-eccc6cda8bc7 0xc00351d6c7 0xc00351d6c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pjsvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pjsvl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pjsvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 22:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 22:15:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 22:15:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 22:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.215,StartTime:2020-05-16 22:15:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 22:15:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a025fa5e79a9362044d78c7575464062f4eeb100b02607de869dd3687e2ea7df,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:15:44.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2167" for this suite. • [SLOW TEST:9.214 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":239,"skipped":3865,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:15:44.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8653 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 22:15:44.499: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 22:16:12.667: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.159:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8653 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 22:16:12.667: INFO: >>> kubeConfig: /root/.kube/config I0516 22:16:12.711323 6 log.go:172] (0xc002b960b0) (0xc001bf2140) Create stream I0516 22:16:12.711354 6 log.go:172] (0xc002b960b0) (0xc001bf2140) Stream added, broadcasting: 1 I0516 22:16:12.713420 6 log.go:172] (0xc002b960b0) Reply frame received for 1 I0516 22:16:12.713452 6 log.go:172] (0xc002b960b0) (0xc001bf21e0) Create stream I0516 22:16:12.713466 6 log.go:172] (0xc002b960b0) (0xc001bf21e0) Stream added, broadcasting: 3 I0516 22:16:12.714564 6 log.go:172] (0xc002b960b0) Reply frame received for 3 I0516 
22:16:12.714630 6 log.go:172] (0xc002b960b0) (0xc000d4c140) Create stream I0516 22:16:12.714650 6 log.go:172] (0xc002b960b0) (0xc000d4c140) Stream added, broadcasting: 5 I0516 22:16:12.715742 6 log.go:172] (0xc002b960b0) Reply frame received for 5 I0516 22:16:12.819654 6 log.go:172] (0xc002b960b0) Data frame received for 3 I0516 22:16:12.819692 6 log.go:172] (0xc001bf21e0) (3) Data frame handling I0516 22:16:12.819711 6 log.go:172] (0xc001bf21e0) (3) Data frame sent I0516 22:16:12.819722 6 log.go:172] (0xc002b960b0) Data frame received for 3 I0516 22:16:12.819731 6 log.go:172] (0xc001bf21e0) (3) Data frame handling I0516 22:16:12.820169 6 log.go:172] (0xc002b960b0) Data frame received for 5 I0516 22:16:12.820250 6 log.go:172] (0xc000d4c140) (5) Data frame handling I0516 22:16:12.821858 6 log.go:172] (0xc002b960b0) Data frame received for 1 I0516 22:16:12.821912 6 log.go:172] (0xc001bf2140) (1) Data frame handling I0516 22:16:12.821957 6 log.go:172] (0xc001bf2140) (1) Data frame sent I0516 22:16:12.822002 6 log.go:172] (0xc002b960b0) (0xc001bf2140) Stream removed, broadcasting: 1 I0516 22:16:12.822054 6 log.go:172] (0xc002b960b0) Go away received I0516 22:16:12.822192 6 log.go:172] (0xc002b960b0) (0xc001bf2140) Stream removed, broadcasting: 1 I0516 22:16:12.822220 6 log.go:172] (0xc002b960b0) (0xc001bf21e0) Stream removed, broadcasting: 3 I0516 22:16:12.822231 6 log.go:172] (0xc002b960b0) (0xc000d4c140) Stream removed, broadcasting: 5 May 16 22:16:12.822: INFO: Found all expected endpoints: [netserver-0] May 16 22:16:12.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.216:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8653 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 22:16:12.825: INFO: >>> kubeConfig: /root/.kube/config I0516 22:16:12.849979 6 log.go:172] (0xc00309a370) (0xc0018d3220) Create stream I0516 22:16:12.850011 6 log.go:172] (0xc00309a370) (0xc0018d3220) Stream added, broadcasting: 1 I0516 22:16:12.851611 6 log.go:172] (0xc00309a370) Reply frame received for 1 I0516 22:16:12.851648 6 log.go:172] (0xc00309a370) (0xc001bf2460) Create stream I0516 22:16:12.851662 6 log.go:172] (0xc00309a370) (0xc001bf2460) Stream added, broadcasting: 3 I0516 22:16:12.852376 6 log.go:172] (0xc00309a370) Reply frame received for 3 I0516 22:16:12.852401 6 log.go:172] (0xc00309a370) (0xc000d4c1e0) Create stream I0516 22:16:12.852410 6 log.go:172] (0xc00309a370) (0xc000d4c1e0) Stream added, broadcasting: 5 I0516 22:16:12.853288 6 log.go:172] (0xc00309a370) Reply frame received for 5 I0516 22:16:12.923096 6 log.go:172] (0xc00309a370) Data frame received for 5 I0516 22:16:12.923146 6 log.go:172] (0xc000d4c1e0) (5) Data frame handling I0516 22:16:12.923166 6 log.go:172] (0xc00309a370) Data frame received for 3 I0516 22:16:12.923172 6 log.go:172] (0xc001bf2460) (3) Data frame handling I0516 22:16:12.923179 6 log.go:172] (0xc001bf2460) (3) Data frame sent I0516 22:16:12.923286 6 log.go:172] (0xc00309a370) Data frame received for 3 I0516 22:16:12.923315 6 log.go:172] (0xc001bf2460) (3) Data frame handling I0516 22:16:12.924720 6 log.go:172] (0xc00309a370) Data frame received for 1 I0516 22:16:12.924733 6 log.go:172] (0xc0018d3220) (1) Data frame handling I0516 22:16:12.924753 6 log.go:172] (0xc0018d3220) (1) Data frame sent I0516 22:16:12.924784 6 log.go:172] (0xc00309a370) (0xc0018d3220) Stream removed, broadcasting: 1 I0516 22:16:12.924803 6 
log.go:172] (0xc00309a370) Go away received I0516 22:16:12.924900 6 log.go:172] (0xc00309a370) (0xc0018d3220) Stream removed, broadcasting: 1 I0516 22:16:12.924923 6 log.go:172] (0xc00309a370) (0xc001bf2460) Stream removed, broadcasting: 3 I0516 22:16:12.924935 6 log.go:172] (0xc00309a370) (0xc000d4c1e0) Stream removed, broadcasting: 5 May 16 22:16:12.924: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:16:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8653" for this suite. • [SLOW TEST:28.513 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3871,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:16:12.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:16:13.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b" in namespace "downward-api-4590" to be "success or failure" May 16 22:16:13.096: INFO: Pod "downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.122081ms May 16 22:16:15.105: INFO: Pod "downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021161589s May 16 22:16:17.108: INFO: Pod "downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025124386s STEP: Saw pod success May 16 22:16:17.109: INFO: Pod "downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b" satisfied condition "success or failure" May 16 22:16:17.111: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b container client-container: STEP: delete the pod May 16 22:16:17.140: INFO: Waiting for pod downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b to disappear May 16 22:16:17.144: INFO: Pod downwardapi-volume-5a4766d1-f824-4c04-a742-d0783947517b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:16:17.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4590" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:16:17.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-4qrsw in namespace proxy-1917 I0516 22:16:17.302707 6 runners.go:189] Created replication controller with name: proxy-service-4qrsw, namespace: proxy-1917, replica count: 1 I0516 22:16:18.353331 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 22:16:19.353544 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 22:16:20.353734 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 22:16:21.354021 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:22.354229 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:23.354458 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:24.354680 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:25.354862 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:26.355079 6 
runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:27.355265 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:28.355491 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 22:16:29.355679 6 runners.go:189] proxy-service-4qrsw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 22:16:29.358: INFO: setup took 12.122774469s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 16 22:16:29.366: INFO: (0) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 7.186067ms) May 16 22:16:29.376: INFO: (0) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 17.557587ms) May 16 22:16:29.377: INFO: (0) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 18.826541ms) May 16 22:16:29.377: INFO: (0) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 18.794552ms) May 16 22:16:29.377: INFO: (0) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 18.890157ms) May 16 22:16:29.391: INFO: (0) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 32.71013ms) May 16 22:16:29.391: INFO: (0) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 32.849007ms) May 16 22:16:29.392: INFO: (0) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 33.179557ms) May 16 22:16:29.397: INFO: (0) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 37.956914ms) May 16 22:16:29.405: INFO: (0) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 46.55704ms) May 16 22:16:29.406: INFO: (0) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 47.325153ms) May 16 22:16:29.407: INFO: (0) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 48.22074ms) May 16 22:16:29.410: INFO: (0) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 51.299715ms) May 16 22:16:29.410: INFO: (0) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 51.410705ms) May 16 22:16:29.410: INFO: (0) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 51.306909ms) May 16 22:16:29.410: INFO: (0) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 4.113274ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 4.154163ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 4.157877ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.186454ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... 
(200; 4.238742ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 4.223609ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 4.271513ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 4.174109ms) May 16 22:16:29.414: INFO: (1) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.312324ms) May 16 22:16:29.415: INFO: (1) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 3.023576ms) May 16 22:16:29.419: INFO: (2) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 3.0185ms) May 16 22:16:29.419: INFO: (2) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 3.289955ms) May 16 22:16:29.419: INFO: (2) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 3.307438ms) May 16 22:16:29.420: INFO: (2) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.618589ms) May 16 22:16:29.420: INFO: (2) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 3.925008ms) May 16 22:16:29.425: INFO: (3) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 4.002015ms) May 16 22:16:29.425: INFO: (3) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 4.020951ms) May 16 22:16:29.425: INFO: (3) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 4.011905ms) May 16 22:16:29.426: INFO: (3) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 4.074681ms) May 16 22:16:29.426: INFO: (3) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.055507ms) May 16 22:16:29.426: INFO: (3) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... (200; 4.215502ms) May 16 22:16:29.426: INFO: (3) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 4.382337ms) May 16 22:16:29.427: INFO: (3) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 5.206247ms) May 16 22:16:29.427: INFO: (3) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.261831ms) May 16 22:16:29.427: INFO: (3) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 5.323381ms) May 16 22:16:29.427: INFO: (3) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.298386ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 2.824236ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... 
(200; 2.871454ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 2.722343ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 2.92551ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 2.938743ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... (200; 3.094778ms) May 16 22:16:29.430: INFO: (4) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 3.392145ms) May 16 22:16:29.431: INFO: (4) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 3.610876ms) May 16 22:16:29.431: INFO: (4) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 3.922433ms) May 16 22:16:29.431: INFO: (4) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 3.904948ms) May 16 22:16:29.431: INFO: (4) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 3.932483ms) May 16 22:16:29.431: INFO: (4) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 4.387145ms) May 16 22:16:29.431: INFO: (4) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 4.559724ms) May 16 22:16:29.434: INFO: (5) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 2.618111ms) May 16 22:16:29.434: INFO: (5) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 2.827116ms) May 16 22:16:29.434: INFO: (5) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 4.893409ms) May 16 22:16:29.436: INFO: (5) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.965695ms) May 16 22:16:29.437: INFO: (5) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 4.958969ms) May 16 22:16:29.437: INFO: (5) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.024675ms) May 16 22:16:29.437: INFO: (5) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 5.168493ms) May 16 22:16:29.437: INFO: (5) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.206764ms) May 16 22:16:29.437: INFO: (5) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 5.360611ms) May 16 22:16:29.437: INFO: (5) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 5.694147ms) May 16 22:16:29.442: INFO: (6) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 4.636559ms) May 16 22:16:29.442: INFO: (6) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 4.648099ms) May 16 22:16:29.442: INFO: (6) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 5.124092ms) May 16 22:16:29.442: INFO: (6) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 5.186777ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... 
(200; 5.222146ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.253212ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 5.235602ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 5.316299ms) May 16 22:16:29.442: INFO: (6) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 5.466217ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.559659ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 5.522411ms) May 16 22:16:29.443: INFO: (6) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 5.644152ms) May 16 22:16:29.446: INFO: (7) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.00636ms) May 16 22:16:29.446: INFO: (7) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.060387ms) May 16 22:16:29.446: INFO: (7) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.052387ms) May 16 22:16:29.446: INFO: (7) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 3.242312ms) May 16 22:16:29.446: INFO: (7) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 3.223948ms) May 16 22:16:29.446: INFO: (7) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 3.358251ms) May 16 22:16:29.447: INFO: (7) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.44873ms) May 16 22:16:29.447: INFO: (7) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.442307ms) May 16 22:16:29.447: INFO: (7) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: ... (200; 3.875965ms) May 16 22:16:29.452: INFO: (8) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 4.010651ms) May 16 22:16:29.452: INFO: (8) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... 
(200; 4.236138ms) May 16 22:16:29.453: INFO: (8) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 4.637937ms) May 16 22:16:29.453: INFO: (8) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 4.595659ms) May 16 22:16:29.453: INFO: (8) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.224644ms) May 16 22:16:29.453: INFO: (8) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 5.213253ms) May 16 22:16:29.453: INFO: (8) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 5.281356ms) May 16 22:16:29.454: INFO: (8) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 5.672581ms) May 16 22:16:29.454: INFO: (8) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.630078ms) May 16 22:16:29.456: INFO: (9) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 2.449039ms) May 16 22:16:29.457: INFO: (9) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.218676ms) May 16 22:16:29.457: INFO: (9) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.218403ms) May 16 22:16:29.457: INFO: (9) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... (200; 6.031848ms) May 16 22:16:29.460: INFO: (9) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 6.066367ms) May 16 22:16:29.462: INFO: (9) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 7.843239ms) May 16 22:16:29.462: INFO: (9) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 7.960553ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 8.635886ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 8.562191ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 8.579994ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 8.561614ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 8.68426ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 8.615049ms) May 16 22:16:29.463: INFO: (9) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 8.682964ms) May 16 22:16:29.465: INFO: (10) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... 
(200; 1.786119ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 2.960155ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.16314ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.179196ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.310377ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 3.471386ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 3.44134ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.488264ms) May 16 22:16:29.466: INFO: (10) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 3.449623ms) May 16 22:16:29.467: INFO: (10) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... (200; 2.41872ms) May 16 22:16:29.470: INFO: (11) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 3.115748ms) May 16 22:16:29.470: INFO: (11) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.199867ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 3.344425ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.422856ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.413298ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 3.709961ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 4.245601ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 4.293847ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 4.305589ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 4.363555ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 4.362271ms) May 16 22:16:29.471: INFO: (11) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 4.33846ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 3.27547ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 3.30705ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.337765ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.354926ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.33739ms) May 16 22:16:29.475: INFO: (12) 
/api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... (200; 3.346069ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 3.436991ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.482667ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.820372ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 3.827542ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 3.82718ms) May 16 22:16:29.475: INFO: (12) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 3.803153ms) May 16 22:16:29.476: INFO: (12) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 4.36047ms) May 16 22:16:29.476: INFO: (12) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 4.447954ms) May 16 22:16:29.476: INFO: (12) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 4.417691ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 4.069643ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.092421ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... (200; 4.157611ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test<... 
(200; 4.158008ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 4.143741ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 4.140178ms) May 16 22:16:29.480: INFO: (13) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 4.155469ms) May 16 22:16:29.481: INFO: (13) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 4.660862ms) May 16 22:16:29.481: INFO: (13) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 5.17795ms) May 16 22:16:29.481: INFO: (13) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 5.184642ms) May 16 22:16:29.481: INFO: (13) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.290476ms) May 16 22:16:29.481: INFO: (13) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.25302ms) May 16 22:16:29.481: INFO: (13) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 5.220399ms) May 16 22:16:29.485: INFO: (14) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.946422ms) May 16 22:16:29.485: INFO: (14) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 3.917105ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 5.72472ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 5.800361ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 5.811595ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: ... (200; 5.843609ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 5.874546ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 5.839948ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 5.847815ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.867754ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 5.914675ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.849514ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 5.996385ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 5.892299ms) May 16 22:16:29.487: INFO: (14) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 5.963884ms) May 16 22:16:29.490: INFO: (15) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 2.094981ms) May 16 22:16:29.490: INFO: (15) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... 
(200; 2.955054ms) May 16 22:16:29.490: INFO: (15) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 2.964963ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.18854ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.150685ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.178141ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.154582ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 3.224096ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 3.175587ms) May 16 22:16:29.491: INFO: (15) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: ... (200; 6.597387ms) May 16 22:16:29.499: INFO: (16) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 6.733035ms) May 16 22:16:29.499: INFO: (16) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 6.719227ms) May 16 22:16:29.499: INFO: (16) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 6.748707ms) May 16 22:16:29.499: INFO: (16) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 6.762342ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 4.49171ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.58536ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: ... (200; 4.64297ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 4.581943ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 4.625914ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 4.770381ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 4.766908ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... 
(200; 4.845986ms) May 16 22:16:29.504: INFO: (17) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 4.803084ms) May 16 22:16:29.505: INFO: (17) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 5.696028ms) May 16 22:16:29.505: INFO: (17) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 5.801793ms) May 16 22:16:29.505: INFO: (17) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 5.754316ms) May 16 22:16:29.505: INFO: (17) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 5.782396ms) May 16 22:16:29.505: INFO: (17) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 5.830834ms) May 16 22:16:29.507: INFO: (18) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 1.905487ms) May 16 22:16:29.508: INFO: (18) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: ... (200; 3.626503ms) May 16 22:16:29.509: INFO: (18) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:160/proxy/: foo (200; 3.679201ms) May 16 22:16:29.509: INFO: (18) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 3.850029ms) May 16 22:16:29.509: INFO: (18) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 3.844824ms) May 16 22:16:29.509: INFO: (18) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.847233ms) May 16 22:16:29.509: INFO: (18) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz/proxy/: test (200; 3.854723ms) May 16 22:16:29.510: INFO: (18) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 4.760193ms) May 16 22:16:29.510: INFO: (18) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 4.731077ms) May 16 22:16:29.510: INFO: (18) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 4.757499ms) May 16 22:16:29.511: INFO: (18) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 5.373536ms) May 16 22:16:29.511: INFO: (18) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 5.468157ms) May 16 22:16:29.511: INFO: (18) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 5.520297ms) May 16 22:16:29.514: INFO: (19) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 2.958505ms) May 16 22:16:29.514: INFO: (19) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:443/proxy/: test (200; 3.488378ms) May 16 22:16:29.515: INFO: (19) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:1080/proxy/: test<... (200; 3.683772ms) May 16 22:16:29.515: INFO: (19) /api/v1/namespaces/proxy-1917/pods/proxy-service-4qrsw-h6cfz:162/proxy/: bar (200; 3.762218ms) May 16 22:16:29.515: INFO: (19) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:460/proxy/: tls baz (200; 3.747814ms) May 16 22:16:29.515: INFO: (19) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname1/proxy/: foo (200; 3.809548ms) May 16 22:16:29.515: INFO: (19) /api/v1/namespaces/proxy-1917/pods/http:proxy-service-4qrsw-h6cfz:1080/proxy/: ... 
(200; 3.803262ms) May 16 22:16:29.517: INFO: (19) /api/v1/namespaces/proxy-1917/services/http:proxy-service-4qrsw:portname2/proxy/: bar (200; 6.301226ms) May 16 22:16:29.517: INFO: (19) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname1/proxy/: foo (200; 6.540758ms) May 16 22:16:29.518: INFO: (19) /api/v1/namespaces/proxy-1917/services/proxy-service-4qrsw:portname2/proxy/: bar (200; 6.84756ms) May 16 22:16:29.518: INFO: (19) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname2/proxy/: tls qux (200; 7.478352ms) May 16 22:16:29.518: INFO: (19) /api/v1/namespaces/proxy-1917/services/https:proxy-service-4qrsw:tlsportname1/proxy/: tls baz (200; 7.533549ms) May 16 22:16:29.518: INFO: (19) /api/v1/namespaces/proxy-1917/pods/https:proxy-service-4qrsw-h6cfz:462/proxy/: tls qux (200; 7.513314ms) STEP: deleting ReplicationController proxy-service-4qrsw in namespace proxy-1917, will wait for the garbage collector to delete the pods May 16 22:16:29.578: INFO: Deleting ReplicationController proxy-service-4qrsw took: 7.16887ms May 16 22:16:29.878: INFO: Terminating ReplicationController proxy-service-4qrsw pods took: 300.248823ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:16:32.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1917" for this suite. • [SLOW TEST:15.109 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":242,"skipped":3948,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:16:32.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b in namespace container-probe-7379 May 16 22:16:36.406: INFO: Started pod liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b in namespace container-probe-7379 STEP: checking the pod's current state and verifying that restartCount is present May 16 22:16:36.409: INFO: Initial restart count of pod liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b is 0 May 16 22:16:54.585: INFO: Restart count of pod container-probe-7379/liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b is now 1 (18.176305527s elapsed) May 16 22:17:16.669: INFO: Restart 
count of pod container-probe-7379/liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b is now 2 (40.26004694s elapsed) May 16 22:17:36.727: INFO: Restart count of pod container-probe-7379/liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b is now 3 (1m0.31813248s elapsed) May 16 22:17:56.780: INFO: Restart count of pod container-probe-7379/liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b is now 4 (1m20.371219708s elapsed) May 16 22:19:04.948: INFO: Restart count of pod container-probe-7379/liveness-d6c9ed0c-c3ec-4c73-8da0-e27ceaafc34b is now 5 (2m28.539165943s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:04.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7379" for this suite. • [SLOW TEST:152.715 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3949,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:05.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1361 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1361 I0516 22:19:05.892265 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1361, replica count: 2 I0516 22:19:08.942598 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 22:19:11.942893 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 22:19:11.942: INFO: Creating new exec pod May 16 22:19:16.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1361 execpodfsvjj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 16 22:19:17.215: INFO: stderr: "I0516 22:19:17.106625 3341 log.go:172] (0xc000952fd0) (0xc000992460) Create stream\nI0516 22:19:17.106680 3341 log.go:172] (0xc000952fd0) 
(0xc000992460) Stream added, broadcasting: 1\nI0516 22:19:17.110487 3341 log.go:172] (0xc000952fd0) Reply frame received for 1\nI0516 22:19:17.110527 3341 log.go:172] (0xc000952fd0) (0xc0006a6640) Create stream\nI0516 22:19:17.110538 3341 log.go:172] (0xc000952fd0) (0xc0006a6640) Stream added, broadcasting: 3\nI0516 22:19:17.111351 3341 log.go:172] (0xc000952fd0) Reply frame received for 3\nI0516 22:19:17.111398 3341 log.go:172] (0xc000952fd0) (0xc00048d400) Create stream\nI0516 22:19:17.111408 3341 log.go:172] (0xc000952fd0) (0xc00048d400) Stream added, broadcasting: 5\nI0516 22:19:17.112221 3341 log.go:172] (0xc000952fd0) Reply frame received for 5\nI0516 22:19:17.199263 3341 log.go:172] (0xc000952fd0) Data frame received for 5\nI0516 22:19:17.199304 3341 log.go:172] (0xc00048d400) (5) Data frame handling\nI0516 22:19:17.199348 3341 log.go:172] (0xc00048d400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0516 22:19:17.201878 3341 log.go:172] (0xc000952fd0) Data frame received for 5\nI0516 22:19:17.201936 3341 log.go:172] (0xc00048d400) (5) Data frame handling\nI0516 22:19:17.201992 3341 log.go:172] (0xc00048d400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0516 22:19:17.209760 3341 log.go:172] (0xc000952fd0) Data frame received for 3\nI0516 22:19:17.209799 3341 log.go:172] (0xc0006a6640) (3) Data frame handling\nI0516 22:19:17.209830 3341 log.go:172] (0xc000952fd0) Data frame received for 1\nI0516 22:19:17.209865 3341 log.go:172] (0xc000992460) (1) Data frame handling\nI0516 22:19:17.209895 3341 log.go:172] (0xc000992460) (1) Data frame sent\nI0516 22:19:17.209920 3341 log.go:172] (0xc000952fd0) (0xc000992460) Stream removed, broadcasting: 1\nI0516 22:19:17.209946 3341 log.go:172] (0xc000952fd0) Data frame received for 5\nI0516 22:19:17.209961 3341 log.go:172] (0xc00048d400) (5) Data frame handling\nI0516 22:19:17.209982 3341 log.go:172] (0xc000952fd0) Go away received\nI0516 22:19:17.210494 3341 log.go:172] (0xc000952fd0) (0xc000992460) Stream removed, broadcasting: 1\nI0516 22:19:17.210513 3341 log.go:172] (0xc000952fd0) (0xc0006a6640) Stream removed, broadcasting: 3\nI0516 22:19:17.210522 3341 log.go:172] (0xc000952fd0) (0xc00048d400) Stream removed, broadcasting: 5\n" May 16 22:19:17.215: INFO: stdout: "" May 16 22:19:17.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1361 execpodfsvjj -- /bin/sh -x -c nc -zv -t -w 2 10.99.79.32 80' May 16 22:19:17.521: INFO: stderr: "I0516 22:19:17.354296 3361 log.go:172] (0xc0000f4dc0) (0xc0007059a0) Create stream\nI0516 22:19:17.354375 3361 log.go:172] (0xc0000f4dc0) (0xc0007059a0) Stream added, broadcasting: 1\nI0516 22:19:17.357316 3361 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0516 22:19:17.357392 3361 log.go:172] (0xc0000f4dc0) (0xc0007840a0) Create stream\nI0516 22:19:17.357402 3361 log.go:172] (0xc0000f4dc0) (0xc0007840a0) Stream added, broadcasting: 3\nI0516 22:19:17.358488 3361 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0516 22:19:17.358562 3361 log.go:172] (0xc0000f4dc0) (0xc000705b80) Create stream\nI0516 22:19:17.358584 3361 log.go:172] (0xc0000f4dc0) (0xc000705b80) Stream added, broadcasting: 5\nI0516 22:19:17.359557 3361 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0516 22:19:17.516021 3361 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0516 22:19:17.516059 3361 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0516 22:19:17.516085 3361 log.go:172] (0xc0007840a0) (3) Data 
frame handling\nI0516 22:19:17.516102 3361 log.go:172] (0xc000705b80) (5) Data frame handling\nI0516 22:19:17.516111 3361 log.go:172] (0xc000705b80) (5) Data frame sent\nI0516 22:19:17.516118 3361 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0516 22:19:17.516127 3361 log.go:172] (0xc000705b80) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.79.32 80\nConnection to 10.99.79.32 80 port [tcp/http] succeeded!\nI0516 22:19:17.516920 3361 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0516 22:19:17.516935 3361 log.go:172] (0xc0007059a0) (1) Data frame handling\nI0516 22:19:17.516961 3361 log.go:172] (0xc0007059a0) (1) Data frame sent\nI0516 22:19:17.516982 3361 log.go:172] (0xc0000f4dc0) (0xc0007059a0) Stream removed, broadcasting: 1\nI0516 22:19:17.517000 3361 log.go:172] (0xc0000f4dc0) Go away received\nI0516 22:19:17.517372 3361 log.go:172] (0xc0000f4dc0) (0xc0007059a0) Stream removed, broadcasting: 1\nI0516 22:19:17.517388 3361 log.go:172] (0xc0000f4dc0) (0xc0007840a0) Stream removed, broadcasting: 3\nI0516 22:19:17.517397 3361 log.go:172] (0xc0000f4dc0) (0xc000705b80) Stream removed, broadcasting: 5\n" May 16 22:19:17.521: INFO: stdout: "" May 16 22:19:17.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1361 execpodfsvjj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31651' May 16 22:19:17.710: INFO: stderr: "I0516 22:19:17.637646 3384 log.go:172] (0xc0007d26e0) (0xc00062e000) Create stream\nI0516 22:19:17.637700 3384 log.go:172] (0xc0007d26e0) (0xc00062e000) Stream added, broadcasting: 1\nI0516 22:19:17.640308 3384 log.go:172] (0xc0007d26e0) Reply frame received for 1\nI0516 22:19:17.640356 3384 log.go:172] (0xc0007d26e0) (0xc0007339a0) Create stream\nI0516 22:19:17.640371 3384 log.go:172] (0xc0007d26e0) (0xc0007339a0) Stream added, broadcasting: 3\nI0516 22:19:17.641710 3384 log.go:172] (0xc0007d26e0) Reply frame received for 3\nI0516 22:19:17.641741 3384 log.go:172] (0xc0007d26e0) (0xc000733b80) Create stream\nI0516 22:19:17.641752 3384 log.go:172] (0xc0007d26e0) (0xc000733b80) Stream added, broadcasting: 5\nI0516 22:19:17.642654 3384 log.go:172] (0xc0007d26e0) Reply frame received for 5\nI0516 22:19:17.702895 3384 log.go:172] (0xc0007d26e0) Data frame received for 5\nI0516 22:19:17.702939 3384 log.go:172] (0xc000733b80) (5) Data frame handling\nI0516 22:19:17.702965 3384 log.go:172] (0xc000733b80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31651\nConnection to 172.17.0.10 31651 port [tcp/31651] succeeded!\nI0516 22:19:17.703472 3384 log.go:172] (0xc0007d26e0) Data frame received for 5\nI0516 22:19:17.703503 3384 log.go:172] (0xc000733b80) (5) Data frame handling\nI0516 22:19:17.703850 3384 log.go:172] (0xc0007d26e0) Data frame received for 3\nI0516 22:19:17.703875 3384 log.go:172] (0xc0007339a0) (3) Data frame handling\nI0516 22:19:17.705332 3384 log.go:172] (0xc0007d26e0) Data frame received for 1\nI0516 22:19:17.705364 3384 log.go:172] (0xc00062e000) (1) Data frame handling\nI0516 22:19:17.705380 3384 log.go:172] (0xc00062e000) (1) Data frame sent\nI0516 22:19:17.705394 3384 log.go:172] (0xc0007d26e0) (0xc00062e000) Stream removed, broadcasting: 1\nI0516 22:19:17.705699 3384 log.go:172] (0xc0007d26e0) (0xc00062e000) Stream removed, broadcasting: 1\nI0516 22:19:17.705717 3384 log.go:172] (0xc0007d26e0) (0xc0007339a0) Stream removed, broadcasting: 3\nI0516 22:19:17.705778 3384 log.go:172] (0xc0007d26e0) Go away received\nI0516 22:19:17.705901 3384 log.go:172] (0xc0007d26e0) (0xc000733b80) Stream removed, 
broadcasting: 5\n" May 16 22:19:17.710: INFO: stdout: "" May 16 22:19:17.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1361 execpodfsvjj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31651' May 16 22:19:17.943: INFO: stderr: "I0516 22:19:17.867316 3405 log.go:172] (0xc0009d2580) (0xc000986140) Create stream\nI0516 22:19:17.867382 3405 log.go:172] (0xc0009d2580) (0xc000986140) Stream added, broadcasting: 1\nI0516 22:19:17.870104 3405 log.go:172] (0xc0009d2580) Reply frame received for 1\nI0516 22:19:17.870176 3405 log.go:172] (0xc0009d2580) (0xc000385540) Create stream\nI0516 22:19:17.870212 3405 log.go:172] (0xc0009d2580) (0xc000385540) Stream added, broadcasting: 3\nI0516 22:19:17.871086 3405 log.go:172] (0xc0009d2580) Reply frame received for 3\nI0516 22:19:17.871124 3405 log.go:172] (0xc0009d2580) (0xc0009861e0) Create stream\nI0516 22:19:17.871135 3405 log.go:172] (0xc0009d2580) (0xc0009861e0) Stream added, broadcasting: 5\nI0516 22:19:17.872066 3405 log.go:172] (0xc0009d2580) Reply frame received for 5\nI0516 22:19:17.936279 3405 log.go:172] (0xc0009d2580) Data frame received for 3\nI0516 22:19:17.936349 3405 log.go:172] (0xc000385540) (3) Data frame handling\nI0516 22:19:17.936402 3405 log.go:172] (0xc0009d2580) Data frame received for 5\nI0516 22:19:17.936448 3405 log.go:172] (0xc0009861e0) (5) Data frame handling\nI0516 22:19:17.936476 3405 log.go:172] (0xc0009861e0) (5) Data frame sent\nI0516 22:19:17.936494 3405 log.go:172] (0xc0009d2580) Data frame received for 5\nI0516 22:19:17.936511 3405 log.go:172] (0xc0009861e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31651\nConnection to 172.17.0.8 31651 port [tcp/31651] succeeded!\nI0516 22:19:17.938197 3405 log.go:172] (0xc0009d2580) Data frame received for 1\nI0516 22:19:17.938216 3405 log.go:172] (0xc000986140) (1) Data frame handling\nI0516 22:19:17.938225 3405 log.go:172] (0xc000986140) (1) Data frame sent\nI0516 22:19:17.938240 3405 log.go:172] (0xc0009d2580) (0xc000986140) Stream removed, broadcasting: 1\nI0516 22:19:17.938459 3405 log.go:172] (0xc0009d2580) Go away received\nI0516 22:19:17.938543 3405 log.go:172] (0xc0009d2580) (0xc000986140) Stream removed, broadcasting: 1\nI0516 22:19:17.938564 3405 log.go:172] (0xc0009d2580) (0xc000385540) Stream removed, broadcasting: 3\nI0516 22:19:17.938575 3405 log.go:172] (0xc0009d2580) (0xc0009861e0) Stream removed, broadcasting: 5\n" May 16 22:19:17.943: INFO: stdout: "" May 16 22:19:17.943: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:18.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1361" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:13.007 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":244,"skipped":4018,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:18.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:18.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1942" for this suite. 
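The QOS-class spec that just ran only asserts that a pod whose container requests equal its limits for both cpu and memory is classified as Guaranteed. A minimal hand-runnable equivalent (pod name, image, and resource values are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expect: Guaranteed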
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":245,"skipped":4051,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:18.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:19:18.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a" in namespace "downward-api-309" to be "success or failure" May 16 22:19:18.275: INFO: Pod "downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.311133ms May 16 22:19:20.279: INFO: Pod "downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01752391s May 16 22:19:22.283: INFO: Pod "downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021459457s May 16 22:19:24.287: INFO: Pod "downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025523366s STEP: Saw pod success May 16 22:19:24.287: INFO: Pod "downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a" satisfied condition "success or failure" May 16 22:19:24.291: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a container client-container: STEP: delete the pod May 16 22:19:24.321: INFO: Waiting for pod downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a to disappear May 16 22:19:24.326: INFO: Pod downwardapi-volume-1ae20c50-a790-4ec5-8445-3725110f806a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:24.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-309" for this suite. 
• [SLOW TEST:6.201 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4073,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:24.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:19:24.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd" in namespace "downward-api-6201" to be "success or failure" May 16 22:19:24.424: INFO: Pod "downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.745046ms May 16 22:19:26.429: INFO: Pod "downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015587298s May 16 22:19:28.433: INFO: Pod "downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.01899029s May 16 22:19:30.438: INFO: Pod "downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024162173s STEP: Saw pod success May 16 22:19:30.438: INFO: Pod "downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd" satisfied condition "success or failure" May 16 22:19:30.441: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd container client-container: STEP: delete the pod May 16 22:19:30.483: INFO: Waiting for pod downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd to disappear May 16 22:19:30.494: INFO: Pod downwardapi-volume-ce852cdf-d52b-4b83-a4b9-84a19b4363dd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:30.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6201" for this suite. 
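The memory-limit spec works the same way but projects a resourceFieldRef instead of a fieldRef, so the container can read its own limit back from a file. Illustrative sketch (names and the 64Mi limit are assumptions, not values from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-memlimit-demo   # prints the limit in bytes: 67108864 for 64Mi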
• [SLOW TEST:6.142 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4075,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:30.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:34.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8061" for this suite. 
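The kubelet spec above schedules a busybox command that always fails and then asserts the container status reports a terminated state with a reason. A hand-run approximation (pod name and restart policy are illustrative; the conformance test's exact pod spec may differ):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
# Once the container has exited:
kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # typically: Error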
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4083,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:34.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 22:19:34.691: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 16 22:19:36.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5143 create -f -' May 16 22:19:39.963: INFO: stderr: "" May 16 22:19:39.963: INFO: stdout: "e2e-test-crd-publish-openapi-5103-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 16 22:19:39.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5143 delete e2e-test-crd-publish-openapi-5103-crds test-cr' May 16 22:19:40.107: INFO: stderr: "" May 16 22:19:40.107: INFO: stdout: "e2e-test-crd-publish-openapi-5103-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 16 22:19:40.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5143 apply -f -' May 16 22:19:40.368: INFO: stderr: "" May 16 22:19:40.368: INFO: stdout: "e2e-test-crd-publish-openapi-5103-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 16 22:19:40.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5143 delete e2e-test-crd-publish-openapi-5103-crds test-cr' May 16 22:19:40.478: INFO: stderr: "" May 16 22:19:40.478: INFO: stdout: "e2e-test-crd-publish-openapi-5103-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 16 22:19:40.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5103-crds' May 16 22:19:40.703: INFO: stderr: "" May 16 22:19:40.703: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5103-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:43.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5143" for this suite. • [SLOW TEST:9.012 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":249,"skipped":4097,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:43.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 16 22:19:43.742: INFO: Waiting up to 5m0s for pod "downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b" in namespace "downward-api-1807" to be "success or failure" May 16 22:19:43.746: INFO: Pod "downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.000418ms May 16 22:19:45.851: INFO: Pod "downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108376712s May 16 22:19:47.862: INFO: Pod "downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.11991401s STEP: Saw pod success May 16 22:19:47.862: INFO: Pod "downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b" satisfied condition "success or failure" May 16 22:19:47.865: INFO: Trying to get logs from node jerma-worker pod downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b container dapi-container: STEP: delete the pod May 16 22:19:47.998: INFO: Waiting for pod downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b to disappear May 16 22:19:48.003: INFO: Pod downward-api-91b5581c-c2be-433e-bbad-9f6f4295240b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:48.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1807" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4104,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:48.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 16 22:19:48.059: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 22:19:48.071: INFO: Waiting for terminating namespaces to be deleted... 
May 16 22:19:48.076: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 16 22:19:48.082: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:19:48.082: INFO: Container kindnet-cni ready: true, restart count 0 May 16 22:19:48.082: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:19:48.082: INFO: Container kube-proxy ready: true, restart count 0 May 16 22:19:48.082: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 16 22:19:48.104: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:19:48.104: INFO: Container kube-proxy ready: true, restart count 0 May 16 22:19:48.104: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 16 22:19:48.104: INFO: Container kube-hunter ready: false, restart count 0 May 16 22:19:48.104: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 16 22:19:48.104: INFO: Container kindnet-cni ready: true, restart count 0 May 16 22:19:48.104: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 16 22:19:48.104: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-281bb434-44d2-4eb5-869e-36877e87d579 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-281bb434-44d2-4eb5-869e-36877e87d579 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-281bb434-44d2-4eb5-869e-36877e87d579 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:56.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9998" for this suite. 
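To replay the predicate spec above by hand: it probes for a schedulable node, stamps it with a random label, and relaunches the pod with a matching nodeSelector. Using the node and label actually printed in this run (the pod name and image are illustrative):

kubectl label node jerma-worker kubernetes.io/e2e-281bb434-44d2-4eb5-869e-36877e87d579=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels-demo
spec:
  nodeSelector:
    kubernetes.io/e2e-281bb434-44d2-4eb5-869e-36877e87d579: "42"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl label node jerma-worker kubernetes.io/e2e-281bb434-44d2-4eb5-869e-36877e87d579-   # remove the label afterwards, as the spec does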
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.276 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":251,"skipped":4112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:56.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 16 22:19:56.384: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix029646872/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:19:56.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6692" for this suite. 
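The proxy spec above only verifies that kubectl proxy can serve the API over a Unix domain socket and that /api/ is retrievable through it. Equivalent by hand (socket path is illustrative):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $!   # stop the proxy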
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":252,"skipped":4150,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:19:56.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:19:57.079: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:19:59.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 22:20:01.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264397, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:20:04.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout 
(1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:20:16.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7916" for this suite. STEP: Destroying namespace "webhook-7916-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.101 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":253,"skipped":4158,"failed":0} SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:20:16.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 16 22:20:20.653: INFO: Pod pod-hostip-34680af0-c183-4b03-98cf-3ab2b4154fa7 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:20:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-244" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4161,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:20:20.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:20:31.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-277" for this suite. • [SLOW TEST:11.228 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":255,"skipped":4174,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:20:31.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:20:32.691: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:20:34.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:20:37.788: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 22:20:37.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:20:38.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7769" for this suite. STEP: Destroying namespace "webhook-7769-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.135 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":256,"skipped":4182,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:20:39.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0516 
22:20:40.239333 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 22:20:40.239: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:20:40.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9348" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":257,"skipped":4190,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:20:40.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 16 22:20:40.959: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 16 22:20:42.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264441, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264441, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264441, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264440, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 22:20:44.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264441, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264441, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264441, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264440, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:20:48.010: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 22:20:48.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:20:49.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-297" for this suite. 
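Both conversion-webhook specs in this run revolve around a CRD whose spec.conversion routes v1<->v2 conversion through the freshly deployed webhook service. This is not the test's actual manifest, but an apiextensions.k8s.io/v1 skeleton of that wiring looks roughly like this (group, names, path, and caBundle are placeholders; the service name mirrors the e2e-test-crd-conversion-webhook service paired above):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        caBundle: Cg==   # placeholder; must be the CA that signed the webhook's serving cert
        service:
          namespace: crd-webhook-297
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert   # assumed path, not read from this log
          port: 443
EOF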
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.215 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":258,"skipped":4190,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:20:49.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-b1e9dfb0-ec64-487e-b278-a994765f06fb in namespace container-probe-5622 May 16 22:20:54.068: INFO: Started pod busybox-b1e9dfb0-ec64-487e-b278-a994765f06fb in namespace container-probe-5622 STEP: checking the pod's current state and verifying that restartCount is present May 16 22:20:54.071: INFO: Initial restart count of pod busybox-b1e9dfb0-ec64-487e-b278-a994765f06fb is 0 May 16 22:21:46.180: INFO: Restart count of pod container-probe-5622/busybox-b1e9dfb0-ec64-487e-b278-a994765f06fb is now 1 (52.109361379s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:21:46.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5622" for this suite. 
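This exec-probe spec is the classic liveness pattern: the container creates /tmp/health, removes it after a while, and the kubelet restarts the container once `cat /tmp/health` starts failing, which is why the restart count above ticks up (~52s to the first restart). A minimal sketch (name, image, and timings are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -w   # RESTARTS climbs as the probe fails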
• [SLOW TEST:56.767 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4191,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:21:46.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-95fc1210-2cc8-4513-8704-d4f928f5bb12 in namespace container-probe-6991 May 16 22:21:50.331: INFO: Started pod liveness-95fc1210-2cc8-4513-8704-d4f928f5bb12 in namespace container-probe-6991 STEP: checking the pod's current state and verifying that restartCount is present May 16 22:21:50.335: INFO: Initial restart count of pod liveness-95fc1210-2cc8-4513-8704-d4f928f5bb12 is 0 May 16 22:22:08.383: INFO: Restart count of pod container-probe-6991/liveness-95fc1210-2cc8-4513-8704-d4f928f5bb12 is now 1 (18.048878658s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:08.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6991" for this suite. 
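The HTTP variant is the same mechanism with an httpGet probe against /healthz; the server in the test image deliberately starts failing its health endpoint so the kubelet restarts the container. A sketch assuming the well-known docs liveness image (image and port are assumptions, not read from this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF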
• [SLOW TEST:22.229 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:08.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 16 22:22:08.752: INFO: >>> kubeConfig: /root/.kube/config May 16 22:22:11.682: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:22.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8857" for this suite. 
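The "same group and version but different kinds" case above amounts to registering two CRDs that differ only in their kind; a sketch with illustrative group and names, each of which then gets its own entry in the aggregated OpenAPI document the test inspects.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com     # illustrative
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object                # minimal structural schema
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.stable.example.com     # illustrative
spec:
  group: stable.example.com         # same group and version as above
  scope: Namespaced
  names:
    plural: bars
    singular: bar
    kind: Bar                       # only the kind differs
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object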
• [SLOW TEST:13.670 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":261,"skipped":4248,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:22.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7894 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7894 STEP: creating replication controller externalsvc in namespace services-7894 I0516 22:22:22.306478 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7894, replica count: 2 I0516 22:22:25.356950 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 22:22:28.357355 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 16 22:22:28.405: INFO: Creating new exec pod May 16 22:22:32.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7894 execpod6mhnt -- /bin/sh -x -c nslookup nodeport-service' May 16 22:22:32.712: INFO: stderr: "I0516 22:22:32.584887 3551 log.go:172] (0xc0009926e0) (0xc000a86000) Create stream\nI0516 22:22:32.584951 3551 log.go:172] (0xc0009926e0) (0xc000a86000) Stream added, broadcasting: 1\nI0516 22:22:32.587815 3551 log.go:172] (0xc0009926e0) Reply frame received for 1\nI0516 22:22:32.587867 3551 log.go:172] (0xc0009926e0) (0xc000a860a0) Create stream\nI0516 22:22:32.587885 3551 log.go:172] (0xc0009926e0) (0xc000a860a0) Stream added, broadcasting: 3\nI0516 22:22:32.588716 3551 log.go:172] (0xc0009926e0) Reply frame received for 3\nI0516 22:22:32.588775 3551 log.go:172] (0xc0009926e0) (0xc000a2a000) Create stream\nI0516 22:22:32.588797 3551 log.go:172] (0xc0009926e0) (0xc000a2a000) Stream added, broadcasting: 5\nI0516 22:22:32.590048 3551 log.go:172] (0xc0009926e0) Reply frame received for 5\nI0516 22:22:32.682120 3551 log.go:172] 
(0xc0009926e0) Data frame received for 5\nI0516 22:22:32.682149 3551 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0516 22:22:32.682184 3551 log.go:172] (0xc000a2a000) (5) Data frame sent\n+ nslookup nodeport-service\nI0516 22:22:32.703695 3551 log.go:172] (0xc0009926e0) Data frame received for 3\nI0516 22:22:32.703733 3551 log.go:172] (0xc000a860a0) (3) Data frame handling\nI0516 22:22:32.703764 3551 log.go:172] (0xc000a860a0) (3) Data frame sent\nI0516 22:22:32.704518 3551 log.go:172] (0xc0009926e0) Data frame received for 3\nI0516 22:22:32.704536 3551 log.go:172] (0xc000a860a0) (3) Data frame handling\nI0516 22:22:32.704557 3551 log.go:172] (0xc000a860a0) (3) Data frame sent\nI0516 22:22:32.705393 3551 log.go:172] (0xc0009926e0) Data frame received for 5\nI0516 22:22:32.705433 3551 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0516 22:22:32.705494 3551 log.go:172] (0xc0009926e0) Data frame received for 3\nI0516 22:22:32.705519 3551 log.go:172] (0xc000a860a0) (3) Data frame handling\nI0516 22:22:32.707120 3551 log.go:172] (0xc0009926e0) Data frame received for 1\nI0516 22:22:32.707159 3551 log.go:172] (0xc000a86000) (1) Data frame handling\nI0516 22:22:32.707177 3551 log.go:172] (0xc000a86000) (1) Data frame sent\nI0516 22:22:32.707193 3551 log.go:172] (0xc0009926e0) (0xc000a86000) Stream removed, broadcasting: 1\nI0516 22:22:32.707237 3551 log.go:172] (0xc0009926e0) Go away received\nI0516 22:22:32.707618 3551 log.go:172] (0xc0009926e0) (0xc000a86000) Stream removed, broadcasting: 1\nI0516 22:22:32.707641 3551 log.go:172] (0xc0009926e0) (0xc000a860a0) Stream removed, broadcasting: 3\nI0516 22:22:32.707653 3551 log.go:172] (0xc0009926e0) (0xc000a2a000) Stream removed, broadcasting: 5\n" May 16 22:22:32.712: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7894.svc.cluster.local\tcanonical name = externalsvc.services-7894.svc.cluster.local.\nName:\texternalsvc.services-7894.svc.cluster.local\nAddress: 10.111.70.229\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7894, will wait for the garbage collector to delete the pods May 16 22:22:32.775: INFO: Deleting ReplicationController externalsvc took: 4.763517ms May 16 22:22:33.175: INFO: Terminating ReplicationController externalsvc pods took: 400.275986ms May 16 22:22:39.601: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:39.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7894" for this suite. 
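The end state this test verifies, a NodePort service rewritten to type ExternalName pointing at the in-cluster FQDN of externalsvc, corresponds to a manifest roughly like the sketch below (names are taken from the log; the test performs the type change through an API update rather than by applying a file).

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-7894
spec:
  type: ExternalName
  externalName: externalsvc.services-7894.svc.cluster.local

After the change, cluster DNS answers lookups of nodeport-service with a CNAME to externalsvc.services-7894.svc.cluster.local, which is exactly what the nslookup stdout above shows.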
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.499 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":262,"skipped":4262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:39.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-73afa2d9-d1d7-4088-8dce-06927ed6dd53 STEP: Creating a pod to test consume configMaps May 16 22:22:39.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721" in namespace "configmap-6280" to be "success or failure" May 16 22:22:39.746: INFO: Pod "pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721": Phase="Pending", Reason="", readiness=false. Elapsed: 13.526157ms May 16 22:22:41.750: INFO: Pod "pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017477736s May 16 22:22:43.755: INFO: Pod "pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022204621s STEP: Saw pod success May 16 22:22:43.755: INFO: Pod "pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721" satisfied condition "success or failure" May 16 22:22:43.758: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721 container configmap-volume-test: STEP: delete the pod May 16 22:22:43.786: INFO: Waiting for pod pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721 to disappear May 16 22:22:43.790: INFO: Pod pod-configmaps-f3e47238-59e2-454f-9c68-4acbefa5c721 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:43.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6280" for this suite. 
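"With mappings" in the test above refers to the items list of a configMap volume, which maps a ConfigMap key onto an arbitrary file path instead of the default key-named file. A minimal sketch, with illustrative ConfigMap and pod names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mappings-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cfg/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:                       # the mapping: key data-1 surfaces as path/to/data-1
      - key: data-1
        path: path/to/data-1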
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4291,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:43.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 16 22:22:43.879: INFO: Waiting up to 5m0s for pod "downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a" in namespace "downward-api-8100" to be "success or failure" May 16 22:22:43.887: INFO: Pod "downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.909648ms May 16 22:22:45.931: INFO: Pod "downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052439041s May 16 22:22:47.936: INFO: Pod "downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05713424s May 16 22:22:49.940: INFO: Pod "downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061775503s STEP: Saw pod success May 16 22:22:49.940: INFO: Pod "downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a" satisfied condition "success or failure" May 16 22:22:49.944: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a container dapi-container: STEP: delete the pod May 16 22:22:49.978: INFO: Waiting for pod downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a to disappear May 16 22:22:49.982: INFO: Pod downward-api-a5b58971-ca37-4c25-9f19-43cb184c440a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:49.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8100" for this suite. 
• [SLOW TEST:6.192 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:49.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6bdeb8a6-299e-4474-b8f5-13d24ada9348 STEP: Creating a pod to test consume secrets May 16 22:22:50.092: INFO: Waiting up to 5m0s for pod "pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95" in namespace "secrets-3602" to be "success or failure" May 16 22:22:50.111: INFO: Pod "pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95": Phase="Pending", Reason="", readiness=false. Elapsed: 18.238209ms May 16 22:22:52.114: INFO: Pod "pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02185143s May 16 22:22:54.118: INFO: Pod "pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025430438s STEP: Saw pod success May 16 22:22:54.118: INFO: Pod "pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95" satisfied condition "success or failure" May 16 22:22:54.121: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95 container secret-env-test: STEP: delete the pod May 16 22:22:54.195: INFO: Waiting for pod pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95 to disappear May 16 22:22:54.302: INFO: Pod pod-secrets-3df980cc-36a3-4095-8e60-00aaa9ef0d95 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:54.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3602" for this suite. 
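Consuming a Secret through environment variables, as above, uses secretKeyRef; a sketch with illustrative names (the test generates its own Secret and pod names):

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                # illustrative
stringData:
  password: s3cr3t
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_PASSWORD"]
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: password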
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4347,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:54.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 16 22:22:54.401: INFO: Waiting up to 5m0s for pod "client-containers-b6e0bbf4-4123-4a26-8488-26e276211712" in namespace "containers-1272" to be "success or failure" May 16 22:22:54.422: INFO: Pod "client-containers-b6e0bbf4-4123-4a26-8488-26e276211712": Phase="Pending", Reason="", readiness=false. Elapsed: 21.134108ms May 16 22:22:56.425: INFO: Pod "client-containers-b6e0bbf4-4123-4a26-8488-26e276211712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024775287s May 16 22:22:58.429: INFO: Pod "client-containers-b6e0bbf4-4123-4a26-8488-26e276211712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028424802s STEP: Saw pod success May 16 22:22:58.429: INFO: Pod "client-containers-b6e0bbf4-4123-4a26-8488-26e276211712" satisfied condition "success or failure" May 16 22:22:58.431: INFO: Trying to get logs from node jerma-worker pod client-containers-b6e0bbf4-4123-4a26-8488-26e276211712 container test-container: STEP: delete the pod May 16 22:22:58.483: INFO: Waiting for pod client-containers-b6e0bbf4-4123-4a26-8488-26e276211712 to disappear May 16 22:22:58.488: INFO: Pod client-containers-b6e0bbf4-4123-4a26-8488-26e276211712 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:22:58.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1272" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4356,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:22:58.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 16 22:22:58.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:22:58.559: INFO: Number of nodes with available pods: 0 May 16 22:22:58.559: INFO: Node jerma-worker is running more than one daemon pod May 16 22:22:59.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:22:59.568: INFO: Number of nodes with available pods: 0 May 16 22:22:59.568: INFO: Node jerma-worker is running more than one daemon pod May 16 22:23:00.643: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:00.647: INFO: Number of nodes with available pods: 0 May 16 22:23:00.647: INFO: Node jerma-worker is running more than one daemon pod May 16 22:23:01.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:01.713: INFO: Number of nodes with available pods: 0 May 16 22:23:01.713: INFO: Node jerma-worker is running more than one daemon pod May 16 22:23:02.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:02.568: INFO: Number of nodes with available pods: 1 May 16 22:23:02.568: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:03.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:03.570: INFO: Number of nodes with available pods: 2 May 16 22:23:03.570: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 16 22:23:03.643: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:03.646: INFO: Number of nodes with available pods: 1 May 16 22:23:03.646: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:04.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:04.655: INFO: Number of nodes with available pods: 1 May 16 22:23:04.656: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:05.663: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:05.667: INFO: Number of nodes with available pods: 1 May 16 22:23:05.667: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:06.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:06.656: INFO: Number of nodes with available pods: 1 May 16 22:23:06.656: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:07.651: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:07.660: INFO: Number of nodes with available pods: 1 May 16 22:23:07.660: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:08.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:08.656: INFO: Number of nodes with available pods: 1 May 16 22:23:08.656: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:09.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:09.679: INFO: Number of nodes with available pods: 1 May 16 22:23:09.679: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:10.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:10.674: INFO: Number of nodes with available pods: 1 May 16 22:23:10.674: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:11.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:11.655: INFO: Number of nodes with available pods: 1 May 16 22:23:11.655: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:12.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:12.656: INFO: Number of nodes with available pods: 1 May 16 22:23:12.656: INFO: Node jerma-worker2 is running more than one daemon pod May 16 22:23:13.652: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 22:23:13.656: INFO: Number of nodes with available pods: 2 May 16 22:23:13.656: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4712, will wait for the garbage collector to delete the pods May 16 22:23:13.718: INFO: Deleting DaemonSet.extensions daemon-set took: 5.981344ms May 16 22:23:14.018: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.257995ms May 16 22:23:19.322: INFO: Number of nodes with available pods: 0 May 16 22:23:19.322: INFO: Number of running nodes: 0, number of available pods: 0 May 16 22:23:19.356: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4712/daemonsets","resourceVersion":"16754406"},"items":null} May 16 22:23:19.358: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4712/pods","resourceVersion":"16754406"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:23:19.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4712" for this suite. • [SLOW TEST:20.878 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":267,"skipped":4362,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:23:19.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 22:23:19.413: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 16 22:23:22.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 create -f -' May 16 22:23:25.782: INFO: stderr: "" May 16 22:23:25.782: INFO: stdout: "e2e-test-crd-publish-openapi-3617-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 16 22:23:25.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 delete e2e-test-crd-publish-openapi-3617-crds test-foo' May 16 22:23:25.919: INFO: stderr: "" 
May 16 22:23:25.919: INFO: stdout: "e2e-test-crd-publish-openapi-3617-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 16 22:23:25.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 apply -f -' May 16 22:23:26.167: INFO: stderr: "" May 16 22:23:26.167: INFO: stdout: "e2e-test-crd-publish-openapi-3617-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 16 22:23:26.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 delete e2e-test-crd-publish-openapi-3617-crds test-foo' May 16 22:23:26.275: INFO: stderr: "" May 16 22:23:26.275: INFO: stdout: "e2e-test-crd-publish-openapi-3617-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 16 22:23:26.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 create -f -' May 16 22:23:26.531: INFO: rc: 1 May 16 22:23:26.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 apply -f -' May 16 22:23:26.780: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 16 22:23:26.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 create -f -' May 16 22:23:26.998: INFO: rc: 1 May 16 22:23:26.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7815 apply -f -' May 16 22:23:27.260: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 16 22:23:27.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3617-crds' May 16 22:23:27.492: INFO: stderr: "" May 16 22:23:27.492: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3617-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 16 22:23:27.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3617-crds.metadata' May 16 22:23:27.710: INFO: stderr: "" May 16 22:23:27.710: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3617-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 16 22:23:27.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3617-crds.spec' May 16 22:23:27.925: INFO: stderr: "" May 16 22:23:27.925: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3617-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 16 22:23:27.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3617-crds.spec.bars' May 16 22:23:28.176: INFO: stderr: "" May 16 22:23:28.176: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3617-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 16 22:23:28.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3617-crds.spec.bars2' May 16 22:23:28.407: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:23:30.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7815" for this suite. 
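A CRD carrying the kind of validation schema the explain output above documents (spec.bars as a list of objects with a required name, plus age and bazs) might look like the sketch below; the group and names are illustrative, and the scalar type for age is inferred from the explain text rather than confirmed by it.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-demo.example.com   # illustrative
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:                 # "List of Bars and their specs."
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string  # required, per the explain output
                    age:
                      type: string  # type assumed for illustration
                    bazs:
                      type: array
                      items:
                        type: string

With such a schema published, kubectl performs the client-side validation seen above: creates with unknown or missing required properties are rejected (the rc: 1 lines), and kubectl explain can walk the CR's fields recursively.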
• [SLOW TEST:10.934 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":268,"skipped":4383,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:23:30.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 16 22:23:30.394: INFO: Waiting up to 5m0s for pod "pod-bf3ce499-03fe-48de-a173-4baeda55b20c" in namespace "emptydir-4426" to be "success or failure" May 16 22:23:30.397: INFO: Pod "pod-bf3ce499-03fe-48de-a173-4baeda55b20c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281772ms May 16 22:23:32.439: INFO: Pod "pod-bf3ce499-03fe-48de-a173-4baeda55b20c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045553971s May 16 22:23:34.444: INFO: Pod "pod-bf3ce499-03fe-48de-a173-4baeda55b20c": Phase="Running", Reason="", readiness=true. Elapsed: 4.050007525s May 16 22:23:36.447: INFO: Pod "pod-bf3ce499-03fe-48de-a173-4baeda55b20c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053703487s STEP: Saw pod success May 16 22:23:36.447: INFO: Pod "pod-bf3ce499-03fe-48de-a173-4baeda55b20c" satisfied condition "success or failure" May 16 22:23:36.450: INFO: Trying to get logs from node jerma-worker pod pod-bf3ce499-03fe-48de-a173-4baeda55b20c container test-container: STEP: delete the pod May 16 22:23:36.472: INFO: Waiting for pod pod-bf3ce499-03fe-48de-a173-4baeda55b20c to disappear May 16 22:23:36.476: INFO: Pod pod-bf3ce499-03fe-48de-a173-4baeda55b20c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:23:36.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4426" for this suite. 
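The "(root,0666,tmpfs)" naming above encodes the user the test container runs as, the file mode it writes, and the volume medium. The tmpfs case corresponds to an emptyDir with medium Memory; a sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /mnt/volume/file && ls -l /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed scratch space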
• [SLOW TEST:6.172 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4393,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:23:36.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 16 22:23:36.536: INFO: Waiting up to 5m0s for pod "pod-280586fe-0a71-4411-ae2d-f833b65dc8f6" in namespace "emptydir-2020" to be "success or failure" May 16 22:23:36.578: INFO: Pod "pod-280586fe-0a71-4411-ae2d-f833b65dc8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.249563ms May 16 22:23:38.583: INFO: Pod "pod-280586fe-0a71-4411-ae2d-f833b65dc8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046715816s May 16 22:23:40.587: INFO: Pod "pod-280586fe-0a71-4411-ae2d-f833b65dc8f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050370408s STEP: Saw pod success May 16 22:23:40.587: INFO: Pod "pod-280586fe-0a71-4411-ae2d-f833b65dc8f6" satisfied condition "success or failure" May 16 22:23:40.589: INFO: Trying to get logs from node jerma-worker pod pod-280586fe-0a71-4411-ae2d-f833b65dc8f6 container test-container: STEP: delete the pod May 16 22:23:40.631: INFO: Waiting for pod pod-280586fe-0a71-4411-ae2d-f833b65dc8f6 to disappear May 16 22:23:40.642: INFO: Pod pod-280586fe-0a71-4411-ae2d-f833b65dc8f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:23:40.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2020" for this suite. 
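The default-medium variant above is the same shape with the medium field omitted, so the volume is backed by the node's storage; a minimal sketch of the volume stanza and a mode check, assuming busybox's stat:

  volumes:
  - name: scratch
    emptyDir: {}                   # default medium: node-local storage

  # inside the container, e.g.: stat -c '%a' /mnt/volume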
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4407,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:23:40.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 16 22:23:40.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9" in namespace "projected-615" to be "success or failure" May 16 22:23:40.740: INFO: Pod "downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.393767ms May 16 22:23:42.758: INFO: Pod "downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021402655s May 16 22:23:44.762: INFO: Pod "downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025351542s STEP: Saw pod success May 16 22:23:44.762: INFO: Pod "downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9" satisfied condition "success or failure" May 16 22:23:44.765: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9 container client-container: STEP: delete the pod May 16 22:23:44.801: INFO: Waiting for pod downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9 to disappear May 16 22:23:44.829: INFO: Pod downwardapi-volume-b56aeeef-67e0-41e5-8efe-f1596917aaf9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:23:44.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-615" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4421,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:23:44.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8160.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8160.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 120.25.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.25.120_udp@PTR;check="$$(dig +tcp +noall +answer +search 120.25.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.25.120_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8160.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8160.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 120.25.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.25.120_udp@PTR;check="$$(dig +tcp +noall +answer +search 120.25.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.25.120_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 22:23:51.028: INFO: Unable to read wheezy_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.032: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.035: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.037: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.056: INFO: Unable to read jessie_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.058: INFO: Unable to read jessie_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.060: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.062: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:51.076: INFO: Lookups using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff failed for: [wheezy_udp@dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_udp@dns-test-service.dns-8160.svc.cluster.local jessie_tcp@dns-test-service.dns-8160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local] May 16 22:23:56.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods 
dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.111: INFO: Unable to read jessie_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.113: INFO: Unable to read jessie_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:23:56.137: INFO: Lookups using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff failed for: [wheezy_udp@dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_udp@dns-test-service.dns-8160.svc.cluster.local jessie_tcp@dns-test-service.dns-8160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local] May 16 22:24:01.081: INFO: Unable to read wheezy_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.088: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.091: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.111: INFO: Unable to read jessie_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the 
server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:01.136: INFO: Lookups using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff failed for: [wheezy_udp@dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_udp@dns-test-service.dns-8160.svc.cluster.local jessie_tcp@dns-test-service.dns-8160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local] May 16 22:24:06.082: INFO: Unable to read wheezy_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.115: INFO: Unable to read jessie_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.120: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.123: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod 
dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:06.143: INFO: Lookups using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff failed for: [wheezy_udp@dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_udp@dns-test-service.dns-8160.svc.cluster.local jessie_tcp@dns-test-service.dns-8160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local] May 16 22:24:11.082: INFO: Unable to read wheezy_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.090: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.093: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.115: INFO: Unable to read jessie_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.121: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.123: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:11.142: INFO: Lookups using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff failed for: [wheezy_udp@dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_udp@dns-test-service.dns-8160.svc.cluster.local jessie_tcp@dns-test-service.dns-8160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local] May 16 
22:24:16.081: INFO: Unable to read wheezy_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.112: INFO: Unable to read jessie_udp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff: the server could not find the requested resource (get pods dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff) May 16 22:24:16.139: INFO: Lookups using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff failed for: [wheezy_udp@dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@dns-test-service.dns-8160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_udp@dns-test-service.dns-8160.svc.cluster.local jessie_tcp@dns-test-service.dns-8160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8160.svc.cluster.local] May 16 22:24:21.149: INFO: DNS probes using dns-8160/dns-test-571801eb-d7ed-48aa-a3b7-adbf6b70dfff succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:24:21.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8160" for this suite. 
• [SLOW TEST:37.163 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":272,"skipped":4422,"failed":0} [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:24:21.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 16 22:24:22.185: INFO: Waiting up to 5m0s for pod "downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f" in namespace "downward-api-1206" to be "success or failure" May 16 22:24:22.249: INFO: Pod "downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 63.754048ms May 16 22:24:24.345: INFO: Pod "downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159571708s May 16 22:24:26.350: INFO: Pod "downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164326716s STEP: Saw pod success May 16 22:24:26.350: INFO: Pod "downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f" satisfied condition "success or failure" May 16 22:24:26.353: INFO: Trying to get logs from node jerma-worker pod downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f container dapi-container: STEP: delete the pod May 16 22:24:26.585: INFO: Waiting for pod downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f to disappear May 16 22:24:26.588: INFO: Pod downward-api-7fde5f99-d5c5-4822-ab05-6d8f752b6c7f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:24:26.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1206" for this suite. 
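The probe script above is the whole mechanism of this test: each expected name is resolved with dig over UDP (+notcp) and TCP (+tcp), and a non-empty answer section writes an OK marker file that the prober reads back. The same checks can be replayed by hand; in the sketch below the pod name and image are assumptions, not values from this run, while the query names come straight from the log.

# Launch a throwaway probe pod with dig available (image is an assumption; any
# image that ships dnsutils works).
kubectl run dns-probe --restart=Never --command \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7 -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-probe --timeout=120s

# Service A record, UDP then TCP; a non-empty answer means the name resolved.
kubectl exec dns-probe -- dig +notcp +noall +answer +search dns-test-service.dns-8160.svc.cluster.local A
kubectl exec dns-probe -- dig +tcp +noall +answer +search dns-test-service.dns-8160.svc.cluster.local A

# SRV record for the named port, as in the _http._tcp checks above.
kubectl exec dns-probe -- dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8160.svc.cluster.local SRV

# Reverse lookup of the service cluster IP (the 10.107.25.120 PTR checks).
kubectl exec dns-probe -- dig +notcp +noall +answer 120.25.107.10.in-addr.arpa. PTR

kubectl delete pod dns-probe --now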
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:24:26.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ff1dab28-d86b-4234-b4fc-c33e82857c06 STEP: Creating a pod to test consume secrets May 16 22:24:26.678: INFO: Waiting up to 5m0s for pod "pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d" in namespace "secrets-2939" to be "success or failure" May 16 22:24:26.682: INFO: Pod "pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032717ms May 16 22:24:28.685: INFO: Pod "pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007349299s May 16 22:24:30.710: INFO: Pod "pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032827012s STEP: Saw pod success May 16 22:24:30.711: INFO: Pod "pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d" satisfied condition "success or failure" May 16 22:24:30.713: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d container secret-volume-test: STEP: delete the pod May 16 22:24:30.775: INFO: Waiting for pod pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d to disappear May 16 22:24:30.802: INFO: Pod pod-secrets-089ac1e8-24f2-40f2-ae94-93817fcf605d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:24:30.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2939" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 16 22:24:30.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 22:24:31.571: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 22:24:33.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 22:24:36.624: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 16 22:24:36.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7537-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 16 22:24:37.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6837" for this suite. STEP: Destroying namespace "webhook-6837-markers" for this suite. 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 22:24:30.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 16 22:24:31.571: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 16 22:24:33.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 16 22:24:36.624: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 16 22:24:36.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7537-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 22:24:37.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6837" for this suite.
STEP: Destroying namespace "webhook-6837-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.018 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":275,"skipped":4481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
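The registration STEP above boils down to creating a MutatingWebhookConfiguration whose rules select the custom resource's API group, so writes to that CR are routed through the webhook service before being persisted. A skeletal example of such a registration follows; every name in it is a placeholder, and a real configuration also needs a caBundle matching the webhook's serving certificate.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator
webhooks:
- name: mutate-crd.example.com           # placeholder webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore                  # so a missing backend does not block writes
  clientConfig:
    service:
      namespace: default                 # placeholder: namespace of the webhook service
      name: e2e-test-webhook             # placeholder: the serving service
      path: /mutating-custom-resource
      port: 443
    # caBundle: <PEM bundle for the serving certificate, base64-encoded>
  rules:
  - apiGroups: ["webhook.example.com"]   # the CRD's group, as in the log above
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["*"]
EOF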
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 22:24:37.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 16 22:24:39.185: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 16 22:24:41.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264679, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264679, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264679, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264679, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 16 22:24:44.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 22:24:44.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5883" for this suite.
STEP: Destroying namespace "webhook-5883-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.745 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":276,"skipped":4522,"failed":0}
SS
------------------------------
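The update and patch STEPs correspond to mutating the live ValidatingWebhookConfiguration object: removing CREATE from a rule's operations lets the non-compliant configMap through, and patching CREATE back restores rejection. A sketch with kubectl and JSON-Patch; the configuration name, rule layout, and the disallow key are all assumptions for illustration.

# Remove CREATE from the first webhook's first rule; creates now bypass validation.
kubectl patch validatingwebhookconfiguration demo-validator --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

# Patch CREATE back in to restore interception.
kubectl patch validatingwebhookconfiguration demo-validator --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'

# With CREATE back in the rules, a non-compliant object should be rejected again.
kubectl create configmap disallowed-cm --from-literal=webhook-e2e-test=webhook-disallow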
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 22:24:44.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-14976240-afcc-4f91-8c22-bf582e2c8509
STEP: Creating a pod to test consume secrets
May 16 22:24:44.755: INFO: Waiting up to 5m0s for pod "pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5" in namespace "secrets-957" to be "success or failure"
May 16 22:24:44.987: INFO: Pod "pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 232.199774ms
May 16 22:24:46.991: INFO: Pod "pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235393875s
May 16 22:24:48.994: INFO: Pod "pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.239177212s
May 16 22:24:50.999: INFO: Pod "pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.243838892s
STEP: Saw pod success
May 16 22:24:50.999: INFO: Pod "pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5" satisfied condition "success or failure"
May 16 22:24:51.003: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5 container secret-volume-test:
STEP: delete the pod
May 16 22:24:51.024: INFO: Waiting for pod pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5 to disappear
May 16 22:24:51.028: INFO: Pod pod-secrets-8a77768a-7dd8-4393-a73b-44863adda6b5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 22:24:51.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-957" for this suite.
• [SLOW TEST:6.434 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4524,"failed":0}
SSSSSSSSSSSSS
------------------------------
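Compared with the plain secret-volume spec earlier, this variant adds two knobs: items remaps a secret key to a chosen relative path inside the mount, and mode pins that file's permission bits (here 0400). An illustrative manifest, with all names and values assumed:

kubectl create secret generic demo-secret-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.36
    # Show the remapped path and its mode, then print the content.
    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret-map
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400          # YAML octal; the file shows up as -r--------
EOF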
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 16 22:24:51.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 16 22:24:51.795: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 16 22:24:53.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264691, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264691, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264691, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725264691, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 16 22:24:56.936: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 16 22:24:56.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5578" for this suite.
STEP: Destroying namespace "webhook-5578-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.015 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":278,"skipped":4537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
May 16 22:24:57.053: INFO: Running AfterSuite actions on all nodes
May 16 22:24:57.053: INFO: Running AfterSuite actions on node 1
May 16 22:24:57.053: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4454.615 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
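For reference, the discovery checks in that final spec are ordinary GETs against the API server's discovery endpoints and can be replayed with kubectl; jq here is only an assumed convenience for filtering the JSON.

# Top-level group list: admissionregistration.k8s.io should be present.
kubectl get --raw /apis | jq '.groups[] | select(.name == "admissionregistration.k8s.io")'

# Group document: lists the served versions, including v1.
kubectl get --raw /apis/admissionregistration.k8s.io | jq '.versions[].version'

# Version document: must expose both webhook configuration resources.
kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq -r '.resources[].name'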