I0514 21:09:52.697988 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0514 21:09:52.698208 6 e2e.go:109] Starting e2e run "1ca4ff6d-d3f2-43f3-b99d-b6d492fdc766" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589490591 - Will randomize all specs
Will run 278 of 4842 specs

May 14 21:09:52.763: INFO: >>> kubeConfig: /root/.kube/config
May 14 21:09:52.768: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 14 21:09:52.791: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 14 21:09:52.823: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 14 21:09:52.823: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 14 21:09:52.823: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 14 21:09:52.835: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 14 21:09:52.835: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 14 21:09:52.835: INFO: e2e test version: v1.17.4
May 14 21:09:52.836: INFO: kube-apiserver version: v1.17.2
May 14 21:09:52.836: INFO: >>> kubeConfig: /root/.kube/config
May 14 21:09:52.840: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:09:52.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
May 14 21:09:52.916: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 14 21:09:52.967: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1565 /api/v1/namespaces/watch-1565/configmaps/e2e-watch-test-label-changed 3fa3da23-5b48-4ad6-a8ea-6f139bb01e54 16198582 0 2020-05-14 21:09:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 14 21:09:52.967: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1565 /api/v1/namespaces/watch-1565/configmaps/e2e-watch-test-label-changed 3fa3da23-5b48-4ad6-a8ea-6f139bb01e54 16198583 0 2020-05-14 21:09:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 14 21:09:52.967: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1565 /api/v1/namespaces/watch-1565/configmaps/e2e-watch-test-label-changed 3fa3da23-5b48-4ad6-a8ea-6f139bb01e54 16198584 0 2020-05-14 21:09:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 14 21:10:03.018: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1565 /api/v1/namespaces/watch-1565/configmaps/e2e-watch-test-label-changed 3fa3da23-5b48-4ad6-a8ea-6f139bb01e54 16198613 0 2020-05-14 21:09:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 14 21:10:03.018: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1565 /api/v1/namespaces/watch-1565/configmaps/e2e-watch-test-label-changed 3fa3da23-5b48-4ad6-a8ea-6f139bb01e54 16198614 0 2020-05-14 21:09:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
May 14 21:10:03.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1565 /api/v1/namespaces/watch-1565/configmaps/e2e-watch-test-label-changed 3fa3da23-5b48-4ad6-a8ea-6f139bb01e54 16198615 0 2020-05-14 21:09:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:10:03.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1565" for this suite.
• [SLOW TEST:10.216 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:10:03.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
May 14 21:10:03.652: INFO: created pod pod-service-account-defaultsa
May 14 21:10:03.652: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 14 21:10:03.660: INFO: created pod pod-service-account-mountsa
May 14 21:10:03.660: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 14 21:10:03.667: INFO: created pod pod-service-account-nomountsa
May 14 21:10:03.667: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 14 21:10:03.699: INFO: created pod pod-service-account-defaultsa-mountspec
May 14 21:10:03.699: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 14 21:10:03.708: INFO: created pod pod-service-account-mountsa-mountspec
May 14 21:10:03.708: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 14 21:10:03.797: INFO: created pod pod-service-account-nomountsa-mountspec
May 14 21:10:03.797: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 14 21:10:03.802: INFO: created pod pod-service-account-defaultsa-nomountspec
May 14 21:10:03.802: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 14 21:10:03.858: INFO: created pod pod-service-account-mountsa-nomountspec
May 14 21:10:03.858: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 14 21:10:03.888: INFO: created pod pod-service-account-nomountsa-nomountspec
May 14 21:10:03.888: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:10:03.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7261" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":2,"skipped":51,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:10:04.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-3fea53ab-57c6-45c9-b5bd-6d41a41147cb STEP: Creating a pod to test consume configMaps May 14 21:10:04.196: INFO: Waiting up to 5m0s for pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578" in namespace "configmap-2939" to be "success or failure" May 14 21:10:04.198: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.852962ms May 14 21:10:06.304: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107962802s May 14 21:10:08.315: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119095846s May 14 21:10:10.549: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353749757s May 14 21:10:12.933: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737635517s May 14 21:10:15.179: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 10.983475861s May 14 21:10:17.476: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Pending", Reason="", readiness=false. Elapsed: 13.280834705s May 14 21:10:19.572: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Running", Reason="", readiness=true. Elapsed: 15.376885341s May 14 21:10:21.577: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.381347258s STEP: Saw pod success May 14 21:10:21.577: INFO: Pod "pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578" satisfied condition "success or failure" May 14 21:10:21.580: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578 container configmap-volume-test: STEP: delete the pod May 14 21:10:21.626: INFO: Waiting for pod pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578 to disappear May 14 21:10:21.642: INFO: Pod pod-configmaps-ad38d26b-aa43-49d4-8eba-f8b27788e578 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:10:21.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2939" for this suite. 
• [SLOW TEST:17.635 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":52,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:10:21.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 21:10:22.158: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 21:10:24.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 14 21:10:26.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725087422, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 21:10:29.226: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 14 21:10:29.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3159-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:10:30.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-475" for this suite.
STEP: Destroying namespace "webhook-475-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.521 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":4,"skipped":128,"failed":0}
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:10:30.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-2404b3e7-1ff1-4221-a8db-328faf9a5e72
STEP: Creating a pod to test consume configMaps
May 14 21:10:30.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7" in namespace "configmap-9057" to be "success or failure"
May 14 21:10:30.301: INFO: Pod "pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.957324ms
May 14 21:10:32.305: INFO: Pod "pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026259816s
May 14 21:10:34.309: INFO: Pod "pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030930981s
STEP: Saw pod success
May 14 21:10:34.309: INFO: Pod "pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7" satisfied condition "success or failure"
May 14 21:10:34.313: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7 container configmap-volume-test:
STEP: delete the pod
May 14 21:10:34.503: INFO: Waiting for pod pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7 to disappear
May 14 21:10:34.577: INFO: Pod pod-configmaps-3eeb53bd-f055-42fb-babe-da53624091a7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:10:34.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9057" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":129,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:10:34.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-37c49cf6-76fb-45f6-8682-3f22a0cf0c1a
STEP: Creating configMap with name cm-test-opt-upd-83c82307-ec03-4529-ba95-b787a146d5a9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-37c49cf6-76fb-45f6-8682-3f22a0cf0c1a
STEP: Updating configmap cm-test-opt-upd-83c82307-ec03-4529-ba95-b787a146d5a9
STEP: Creating configMap with name cm-test-opt-create-369aabd0-32e1-470d-a2cf-286ebeb77568
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:10:44.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8410" for this suite.
• [SLOW TEST:10.290 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":138,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:10:44.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-fcf0b469-89c6-4810-b56d-c26e3283d85f in namespace container-probe-1264
May 14 21:10:48.979: INFO: Started pod test-webserver-fcf0b469-89c6-4810-b56d-c26e3283d85f in namespace container-probe-1264
STEP: checking the pod's current state and verifying that restartCount is present
May 14 21:10:48.982: INFO: Initial restart count of pod test-webserver-fcf0b469-89c6-4810-b56d-c26e3283d85f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:14:49.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1264" for this suite.
• [SLOW TEST:244.818 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:14:49.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:15:49.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8603" for this suite.
• [SLOW TEST:60.181 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":183,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 21:15:49.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
May 14 21:15:49.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5393'
May 14 21:15:52.565: INFO: stderr: ""
May 14 21:15:52.566: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 14 21:15:53.686: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:53.686: INFO: Found 0 / 1
May 14 21:15:54.570: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:54.570: INFO: Found 0 / 1
May 14 21:15:55.704: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:55.704: INFO: Found 0 / 1
May 14 21:15:56.570: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:56.570: INFO: Found 0 / 1
May 14 21:15:57.570: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:57.570: INFO: Found 1 / 1
May 14 21:15:57.570: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May 14 21:15:57.573: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:57.573: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 14 21:15:57.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-bdrxp --namespace=kubectl-5393 -p {"metadata":{"annotations":{"x":"y"}}}'
May 14 21:15:57.669: INFO: stderr: ""
May 14 21:15:57.669: INFO: stdout: "pod/agnhost-master-bdrxp patched\n"
STEP: checking annotations
May 14 21:15:57.671: INFO: Selector matched 1 pods for map[app:agnhost]
May 14 21:15:57.671: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 21:15:57.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5393" for this suite.
• [SLOW TEST:7.799 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":9,"skipped":199,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:15:57.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 14 21:15:57.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9965' May 14 21:15:58.032: INFO: stderr: "" May 14 21:15:58.032: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 21:15:58.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9965' May 14 21:15:58.121: INFO: stderr: "" May 14 21:15:58.121: INFO: stdout: "update-demo-nautilus-cscjr update-demo-nautilus-ll8rm " May 14 21:15:58.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cscjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9965' May 14 21:15:58.209: INFO: stderr: "" May 14 21:15:58.209: INFO: stdout: "" May 14 21:15:58.209: INFO: update-demo-nautilus-cscjr is created but not running May 14 21:16:03.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9965' May 14 21:16:03.452: INFO: stderr: "" May 14 21:16:03.452: INFO: stdout: "update-demo-nautilus-cscjr update-demo-nautilus-ll8rm " May 14 21:16:03.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cscjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9965' May 14 21:16:03.621: INFO: stderr: "" May 14 21:16:03.621: INFO: stdout: "true" May 14 21:16:03.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cscjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9965' May 14 21:16:03.782: INFO: stderr: "" May 14 21:16:03.782: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 21:16:03.782: INFO: validating pod update-demo-nautilus-cscjr May 14 21:16:03.804: INFO: got data: { "image": "nautilus.jpg" } May 14 21:16:03.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 21:16:03.804: INFO: update-demo-nautilus-cscjr is verified up and running May 14 21:16:03.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ll8rm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9965' May 14 21:16:03.939: INFO: stderr: "" May 14 21:16:03.939: INFO: stdout: "true" May 14 21:16:03.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ll8rm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9965' May 14 21:16:04.042: INFO: stderr: "" May 14 21:16:04.042: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 21:16:04.042: INFO: validating pod update-demo-nautilus-ll8rm May 14 21:16:04.062: INFO: got data: { "image": "nautilus.jpg" } May 14 21:16:04.062: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 21:16:04.062: INFO: update-demo-nautilus-ll8rm is verified up and running STEP: using delete to clean up resources May 14 21:16:04.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9965' May 14 21:16:04.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 14 21:16:04.197: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 14 21:16:04.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9965' May 14 21:16:04.294: INFO: stderr: "No resources found in kubectl-9965 namespace.\n" May 14 21:16:04.294: INFO: stdout: "" May 14 21:16:04.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9965 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 21:16:04.395: INFO: stderr: "" May 14 21:16:04.395: INFO: stdout: "update-demo-nautilus-cscjr\nupdate-demo-nautilus-ll8rm\n" May 14 21:16:04.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9965' May 14 21:16:05.017: INFO: stderr: "No resources found in kubectl-9965 namespace.\n" May 14 21:16:05.017: INFO: stdout: "" May 14 21:16:05.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9965 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 21:16:05.112: INFO: stderr: "" May 14 21:16:05.112: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:16:05.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9965" for this suite. • [SLOW TEST:7.442 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":10,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:16:05.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-8999 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8999 to expose endpoints map[] May 14 21:16:05.583: INFO: successfully validated that service endpoint-test2 in namespace services-8999 exposes endpoints map[] (32.015572ms 
elapsed) STEP: Creating pod pod1 in namespace services-8999 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8999 to expose endpoints map[pod1:[80]] May 14 21:16:09.656: INFO: successfully validated that service endpoint-test2 in namespace services-8999 exposes endpoints map[pod1:[80]] (4.067269826s elapsed) STEP: Creating pod pod2 in namespace services-8999 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8999 to expose endpoints map[pod1:[80] pod2:[80]] May 14 21:16:14.007: INFO: successfully validated that service endpoint-test2 in namespace services-8999 exposes endpoints map[pod1:[80] pod2:[80]] (4.347194286s elapsed) STEP: Deleting pod pod1 in namespace services-8999 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8999 to expose endpoints map[pod2:[80]] May 14 21:16:15.057: INFO: successfully validated that service endpoint-test2 in namespace services-8999 exposes endpoints map[pod2:[80]] (1.046666652s elapsed) STEP: Deleting pod pod2 in namespace services-8999 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8999 to expose endpoints map[] May 14 21:16:16.229: INFO: successfully validated that service endpoint-test2 in namespace services-8999 exposes endpoints map[] (1.167745204s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:16:16.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8999" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.222 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":11,"skipped":222,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:16:16.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5138.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5138.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5138.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 21:16:22.837: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.840: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.842: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.845: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.852: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.855: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.857: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.860: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:22.865: INFO: Lookups using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local] May 14 21:16:27.869: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.872: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.875: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.877: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.885: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.887: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.890: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.892: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested 
resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:27.898: INFO: Lookups using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local] May 14 21:16:32.870: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.874: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.876: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.879: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.887: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.890: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.892: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.895: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:32.900: INFO: Lookups using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local] May 14 21:16:37.868: INFO: 
Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.871: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.874: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.877: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.885: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.888: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.890: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.892: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:37.898: INFO: Lookups using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local] May 14 21:16:42.869: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.872: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.875: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server 
could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.878: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.887: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.919: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.923: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.927: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:42.933: INFO: Lookups using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local] May 14 21:16:47.869: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.873: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.876: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.880: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.888: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.891: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.894: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.897: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local from pod dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926: the server could not find the requested resource (get pods dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926) May 14 21:16:47.903: INFO: Lookups using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5138.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5138.svc.cluster.local jessie_udp@dns-test-service-2.dns-5138.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5138.svc.cluster.local] May 14 21:16:52.900: INFO: DNS probes using dns-5138/dns-test-b0cad5f6-dab2-4796-ac83-77bb0ee6a926 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:16:53.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5138" for this suite. 
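Note on the probe loop above: each "could not find the requested resource (get pods ...)" entry is the suite failing to read a lookup-result file that the test pod has not written yet, and the poll repeats on a roughly 5s period until one pass in which all eight combinations (wheezy and jessie resolvers, UDP and TCP, pod-subdomain and service-subdomain names) have succeeded, which is the 21:16:52.900 entry. A comparable ad-hoc subdomain lookup can be run by hand while such a headless service exists, along these lines (a sketch only: busybox:1.28 is assumed here for its nslookup, and the dns-5138 namespace no longer exists once the suite destroys it):

  # resolve the headless service's subdomain record through the cluster DNS
  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 \
    -- nslookup dns-test-service-2.dns-5138.svc.cluster.local

For a headless service the answer lists the backing pod IPs rather than a single ClusterIP, which is what the wheezy/jessie probes assert.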
• [SLOW TEST:37.213 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":12,"skipped":235,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:16:53.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8929 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-8929 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8929 May 14 21:16:53.750: INFO: Found 0 stateful pods, waiting for 1 May 14 21:17:03.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 14 21:17:03.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:17:03.981: INFO: stderr: "I0514 21:17:03.877859 340 log.go:172] (0xc000946fd0) (0xc000916640) Create stream\nI0514 21:17:03.877940 340 log.go:172] (0xc000946fd0) (0xc000916640) Stream added, broadcasting: 1\nI0514 21:17:03.881070 340 log.go:172] (0xc000946fd0) Reply frame received for 1\nI0514 21:17:03.881091 340 log.go:172] (0xc000946fd0) (0xc0006a2640) Create stream\nI0514 21:17:03.881096 340 log.go:172] (0xc000946fd0) (0xc0006a2640) Stream added, broadcasting: 3\nI0514 21:17:03.881852 340 log.go:172] (0xc000946fd0) Reply frame received for 3\nI0514 21:17:03.881900 340 log.go:172] (0xc000946fd0) (0xc00071b400) Create stream\nI0514 21:17:03.881909 340 log.go:172] (0xc000946fd0) (0xc00071b400) Stream added, broadcasting: 5\nI0514 21:17:03.882463 340 log.go:172] (0xc000946fd0) Reply frame received for 5\nI0514 21:17:03.952357 340 log.go:172] (0xc000946fd0) Data frame received for 5\nI0514 21:17:03.952384 340 log.go:172] (0xc00071b400) (5) Data frame handling\nI0514 21:17:03.952406 340 log.go:172] (0xc00071b400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:17:03.975184 340 log.go:172] (0xc000946fd0) Data frame received for 5\nI0514 21:17:03.975211 340 log.go:172] (0xc00071b400) (5) Data 
frame handling\nI0514 21:17:03.975262 340 log.go:172] (0xc000946fd0) Data frame received for 3\nI0514 21:17:03.975294 340 log.go:172] (0xc0006a2640) (3) Data frame handling\nI0514 21:17:03.975312 340 log.go:172] (0xc0006a2640) (3) Data frame sent\nI0514 21:17:03.975333 340 log.go:172] (0xc000946fd0) Data frame received for 3\nI0514 21:17:03.975345 340 log.go:172] (0xc0006a2640) (3) Data frame handling\nI0514 21:17:03.977032 340 log.go:172] (0xc000946fd0) Data frame received for 1\nI0514 21:17:03.977043 340 log.go:172] (0xc000916640) (1) Data frame handling\nI0514 21:17:03.977049 340 log.go:172] (0xc000916640) (1) Data frame sent\nI0514 21:17:03.977263 340 log.go:172] (0xc000946fd0) (0xc000916640) Stream removed, broadcasting: 1\nI0514 21:17:03.977459 340 log.go:172] (0xc000946fd0) (0xc000916640) Stream removed, broadcasting: 1\nI0514 21:17:03.977466 340 log.go:172] (0xc000946fd0) (0xc0006a2640) Stream removed, broadcasting: 3\nI0514 21:17:03.977640 340 log.go:172] (0xc000946fd0) Go away received\nI0514 21:17:03.977685 340 log.go:172] (0xc000946fd0) (0xc00071b400) Stream removed, broadcasting: 5\n" May 14 21:17:03.981: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:17:03.981: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:17:03.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 14 21:17:13.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 21:17:13.988: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:17:14.002: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:14.002: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC }] May 14 21:17:14.002: INFO: May 14 21:17:14.002: INFO: StatefulSet ss has not reached scale 3, at 1 May 14 21:17:15.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993507035s May 14 21:17:16.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989682477s May 14 21:17:17.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.776199095s May 14 21:17:18.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.752433939s May 14 21:17:19.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.458325099s May 14 21:17:20.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.452961999s May 14 21:17:21.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.448425926s May 14 21:17:22.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.440035797s May 14 21:17:23.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 434.730899ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8929 May 14 21:17:24.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 
14 21:17:24.793: INFO: stderr: "I0514 21:17:24.701694 362 log.go:172] (0xc0006a8a50) (0xc000624280) Create stream\nI0514 21:17:24.701750 362 log.go:172] (0xc0006a8a50) (0xc000624280) Stream added, broadcasting: 1\nI0514 21:17:24.704662 362 log.go:172] (0xc0006a8a50) Reply frame received for 1\nI0514 21:17:24.704700 362 log.go:172] (0xc0006a8a50) (0xc000117860) Create stream\nI0514 21:17:24.704712 362 log.go:172] (0xc0006a8a50) (0xc000117860) Stream added, broadcasting: 3\nI0514 21:17:24.706037 362 log.go:172] (0xc0006a8a50) Reply frame received for 3\nI0514 21:17:24.706073 362 log.go:172] (0xc0006a8a50) (0xc0006243c0) Create stream\nI0514 21:17:24.706082 362 log.go:172] (0xc0006a8a50) (0xc0006243c0) Stream added, broadcasting: 5\nI0514 21:17:24.707153 362 log.go:172] (0xc0006a8a50) Reply frame received for 5\nI0514 21:17:24.782785 362 log.go:172] (0xc0006a8a50) Data frame received for 3\nI0514 21:17:24.782837 362 log.go:172] (0xc000117860) (3) Data frame handling\nI0514 21:17:24.782865 362 log.go:172] (0xc000117860) (3) Data frame sent\nI0514 21:17:24.782882 362 log.go:172] (0xc0006a8a50) Data frame received for 3\nI0514 21:17:24.782901 362 log.go:172] (0xc000117860) (3) Data frame handling\nI0514 21:17:24.783346 362 log.go:172] (0xc0006a8a50) Data frame received for 5\nI0514 21:17:24.783360 362 log.go:172] (0xc0006243c0) (5) Data frame handling\nI0514 21:17:24.783370 362 log.go:172] (0xc0006243c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:17:24.783376 362 log.go:172] (0xc0006a8a50) Data frame received for 5\nI0514 21:17:24.783425 362 log.go:172] (0xc0006243c0) (5) Data frame handling\nI0514 21:17:24.784619 362 log.go:172] (0xc0006a8a50) Data frame received for 1\nI0514 21:17:24.784633 362 log.go:172] (0xc000624280) (1) Data frame handling\nI0514 21:17:24.784651 362 log.go:172] (0xc000624280) (1) Data frame sent\nI0514 21:17:24.784824 362 log.go:172] (0xc0006a8a50) (0xc000624280) Stream removed, broadcasting: 1\nI0514 21:17:24.784897 362 log.go:172] (0xc0006a8a50) Go away received\nI0514 21:17:24.785527 362 log.go:172] (0xc0006a8a50) (0xc000624280) Stream removed, broadcasting: 1\nI0514 21:17:24.785549 362 log.go:172] (0xc0006a8a50) (0xc000117860) Stream removed, broadcasting: 3\nI0514 21:17:24.785558 362 log.go:172] (0xc0006a8a50) (0xc0006243c0) Stream removed, broadcasting: 5\n" May 14 21:17:24.793: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:17:24.793: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:17:24.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:17:24.964: INFO: stderr: "I0514 21:17:24.908200 383 log.go:172] (0xc0001042c0) (0xc00067f540) Create stream\nI0514 21:17:24.908256 383 log.go:172] (0xc0001042c0) (0xc00067f540) Stream added, broadcasting: 1\nI0514 21:17:24.910791 383 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0514 21:17:24.910838 383 log.go:172] (0xc0001042c0) (0xc000707220) Create stream\nI0514 21:17:24.910855 383 log.go:172] (0xc0001042c0) (0xc000707220) Stream added, broadcasting: 3\nI0514 21:17:24.911693 383 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0514 21:17:24.911726 383 log.go:172] (0xc0001042c0) (0xc0007072c0) Create stream\nI0514 21:17:24.911736 383 log.go:172] (0xc0001042c0) (0xc0007072c0) Stream 
added, broadcasting: 5\nI0514 21:17:24.912554 383 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0514 21:17:24.959703 383 log.go:172] (0xc0001042c0) Data frame received for 5\nI0514 21:17:24.959734 383 log.go:172] (0xc0007072c0) (5) Data frame handling\nI0514 21:17:24.959747 383 log.go:172] (0xc0007072c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0514 21:17:24.959790 383 log.go:172] (0xc0001042c0) Data frame received for 5\nI0514 21:17:24.959807 383 log.go:172] (0xc0007072c0) (5) Data frame handling\nI0514 21:17:24.959824 383 log.go:172] (0xc0001042c0) Data frame received for 3\nI0514 21:17:24.959840 383 log.go:172] (0xc000707220) (3) Data frame handling\nI0514 21:17:24.959856 383 log.go:172] (0xc000707220) (3) Data frame sent\nI0514 21:17:24.959870 383 log.go:172] (0xc0001042c0) Data frame received for 3\nI0514 21:17:24.959879 383 log.go:172] (0xc000707220) (3) Data frame handling\nI0514 21:17:24.961056 383 log.go:172] (0xc0001042c0) Data frame received for 1\nI0514 21:17:24.961070 383 log.go:172] (0xc00067f540) (1) Data frame handling\nI0514 21:17:24.961083 383 log.go:172] (0xc00067f540) (1) Data frame sent\nI0514 21:17:24.961123 383 log.go:172] (0xc0001042c0) (0xc00067f540) Stream removed, broadcasting: 1\nI0514 21:17:24.961144 383 log.go:172] (0xc0001042c0) Go away received\nI0514 21:17:24.961496 383 log.go:172] (0xc0001042c0) (0xc00067f540) Stream removed, broadcasting: 1\nI0514 21:17:24.961510 383 log.go:172] (0xc0001042c0) (0xc000707220) Stream removed, broadcasting: 3\nI0514 21:17:24.961517 383 log.go:172] (0xc0001042c0) (0xc0007072c0) Stream removed, broadcasting: 5\n" May 14 21:17:24.965: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:17:24.965: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:17:24.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:17:25.167: INFO: stderr: "I0514 21:17:25.089775 400 log.go:172] (0xc0001042c0) (0xc0006a9b80) Create stream\nI0514 21:17:25.089825 400 log.go:172] (0xc0001042c0) (0xc0006a9b80) Stream added, broadcasting: 1\nI0514 21:17:25.092847 400 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0514 21:17:25.092918 400 log.go:172] (0xc0001042c0) (0xc00064a780) Create stream\nI0514 21:17:25.092954 400 log.go:172] (0xc0001042c0) (0xc00064a780) Stream added, broadcasting: 3\nI0514 21:17:25.094958 400 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0514 21:17:25.095074 400 log.go:172] (0xc0001042c0) (0xc0002e1540) Create stream\nI0514 21:17:25.095106 400 log.go:172] (0xc0001042c0) (0xc0002e1540) Stream added, broadcasting: 5\nI0514 21:17:25.097739 400 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0514 21:17:25.160520 400 log.go:172] (0xc0001042c0) Data frame received for 3\nI0514 21:17:25.160548 400 log.go:172] (0xc00064a780) (3) Data frame handling\nI0514 21:17:25.160558 400 log.go:172] (0xc00064a780) (3) Data frame sent\nI0514 21:17:25.160564 400 log.go:172] (0xc0001042c0) Data frame received for 3\nI0514 21:17:25.160572 400 log.go:172] (0xc00064a780) (3) Data frame handling\nI0514 21:17:25.160601 400 log.go:172] (0xc0001042c0) Data frame received for 5\nI0514 21:17:25.160607 400 log.go:172] (0xc0002e1540) (5) Data frame 
handling\nI0514 21:17:25.160615 400 log.go:172] (0xc0002e1540) (5) Data frame sent\nI0514 21:17:25.160620 400 log.go:172] (0xc0001042c0) Data frame received for 5\nI0514 21:17:25.160625 400 log.go:172] (0xc0002e1540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0514 21:17:25.162302 400 log.go:172] (0xc0001042c0) Data frame received for 1\nI0514 21:17:25.162357 400 log.go:172] (0xc0006a9b80) (1) Data frame handling\nI0514 21:17:25.162371 400 log.go:172] (0xc0006a9b80) (1) Data frame sent\nI0514 21:17:25.162388 400 log.go:172] (0xc0001042c0) (0xc0006a9b80) Stream removed, broadcasting: 1\nI0514 21:17:25.162414 400 log.go:172] (0xc0001042c0) Go away received\nI0514 21:17:25.162836 400 log.go:172] (0xc0001042c0) (0xc0006a9b80) Stream removed, broadcasting: 1\nI0514 21:17:25.162859 400 log.go:172] (0xc0001042c0) (0xc00064a780) Stream removed, broadcasting: 3\nI0514 21:17:25.162874 400 log.go:172] (0xc0001042c0) (0xc0002e1540) Stream removed, broadcasting: 5\n" May 14 21:17:25.168: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:17:25.168: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:17:25.194: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:17:25.194: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:17:25.194: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 14 21:17:25.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:17:25.395: INFO: stderr: "I0514 21:17:25.328182 420 log.go:172] (0xc000a20a50) (0xc000bae460) Create stream\nI0514 21:17:25.328227 420 log.go:172] (0xc000a20a50) (0xc000bae460) Stream added, broadcasting: 1\nI0514 21:17:25.332210 420 log.go:172] (0xc000a20a50) Reply frame received for 1\nI0514 21:17:25.332265 420 log.go:172] (0xc000a20a50) (0xc00059a640) Create stream\nI0514 21:17:25.332285 420 log.go:172] (0xc000a20a50) (0xc00059a640) Stream added, broadcasting: 3\nI0514 21:17:25.333097 420 log.go:172] (0xc000a20a50) Reply frame received for 3\nI0514 21:17:25.333288 420 log.go:172] (0xc000a20a50) (0xc00002f400) Create stream\nI0514 21:17:25.333307 420 log.go:172] (0xc000a20a50) (0xc00002f400) Stream added, broadcasting: 5\nI0514 21:17:25.334148 420 log.go:172] (0xc000a20a50) Reply frame received for 5\nI0514 21:17:25.388142 420 log.go:172] (0xc000a20a50) Data frame received for 3\nI0514 21:17:25.388193 420 log.go:172] (0xc00059a640) (3) Data frame handling\nI0514 21:17:25.388215 420 log.go:172] (0xc00059a640) (3) Data frame sent\nI0514 21:17:25.388240 420 log.go:172] (0xc000a20a50) Data frame received for 3\nI0514 21:17:25.388254 420 log.go:172] (0xc00059a640) (3) Data frame handling\nI0514 21:17:25.388293 420 log.go:172] (0xc000a20a50) Data frame received for 5\nI0514 21:17:25.388322 420 log.go:172] (0xc00002f400) (5) Data frame handling\nI0514 21:17:25.388348 420 log.go:172] (0xc00002f400) (5) Data frame sent\nI0514 21:17:25.388364 420 log.go:172] (0xc000a20a50) Data frame received for 5\nI0514 21:17:25.388372 420 log.go:172] (0xc00002f400) (5) Data frame handling\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:17:25.389950 420 log.go:172] (0xc000a20a50) Data frame received for 1\nI0514 21:17:25.389971 420 log.go:172] (0xc000bae460) (1) Data frame handling\nI0514 21:17:25.389983 420 log.go:172] (0xc000bae460) (1) Data frame sent\nI0514 21:17:25.390002 420 log.go:172] (0xc000a20a50) (0xc000bae460) Stream removed, broadcasting: 1\nI0514 21:17:25.390020 420 log.go:172] (0xc000a20a50) Go away received\nI0514 21:17:25.390382 420 log.go:172] (0xc000a20a50) (0xc000bae460) Stream removed, broadcasting: 1\nI0514 21:17:25.390400 420 log.go:172] (0xc000a20a50) (0xc00059a640) Stream removed, broadcasting: 3\nI0514 21:17:25.390409 420 log.go:172] (0xc000a20a50) (0xc00002f400) Stream removed, broadcasting: 5\n" May 14 21:17:25.395: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:17:25.395: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:17:25.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:17:25.625: INFO: stderr: "I0514 21:17:25.531016 440 log.go:172] (0xc000b98000) (0xc00079c000) Create stream\nI0514 21:17:25.531076 440 log.go:172] (0xc000b98000) (0xc00079c000) Stream added, broadcasting: 1\nI0514 21:17:25.534075 440 log.go:172] (0xc000b98000) Reply frame received for 1\nI0514 21:17:25.534275 440 log.go:172] (0xc000b98000) (0xc0006f01e0) Create stream\nI0514 21:17:25.534323 440 log.go:172] (0xc000b98000) (0xc0006f01e0) Stream added, broadcasting: 3\nI0514 21:17:25.535792 440 log.go:172] (0xc000b98000) Reply frame received for 3\nI0514 21:17:25.535825 440 log.go:172] (0xc000b98000) (0xc0006f0280) Create stream\nI0514 21:17:25.535835 440 log.go:172] (0xc000b98000) (0xc0006f0280) Stream added, broadcasting: 5\nI0514 21:17:25.536732 440 log.go:172] (0xc000b98000) Reply frame received for 5\nI0514 21:17:25.590595 440 log.go:172] (0xc000b98000) Data frame received for 5\nI0514 21:17:25.590622 440 log.go:172] (0xc0006f0280) (5) Data frame handling\nI0514 21:17:25.590640 440 log.go:172] (0xc0006f0280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:17:25.615388 440 log.go:172] (0xc000b98000) Data frame received for 3\nI0514 21:17:25.615429 440 log.go:172] (0xc0006f01e0) (3) Data frame handling\nI0514 21:17:25.615444 440 log.go:172] (0xc0006f01e0) (3) Data frame sent\nI0514 21:17:25.615457 440 log.go:172] (0xc000b98000) Data frame received for 3\nI0514 21:17:25.615466 440 log.go:172] (0xc0006f01e0) (3) Data frame handling\nI0514 21:17:25.615499 440 log.go:172] (0xc000b98000) Data frame received for 5\nI0514 21:17:25.615513 440 log.go:172] (0xc0006f0280) (5) Data frame handling\nI0514 21:17:25.618662 440 log.go:172] (0xc000b98000) Data frame received for 1\nI0514 21:17:25.618686 440 log.go:172] (0xc00079c000) (1) Data frame handling\nI0514 21:17:25.618718 440 log.go:172] (0xc00079c000) (1) Data frame sent\nI0514 21:17:25.618747 440 log.go:172] (0xc000b98000) (0xc00079c000) Stream removed, broadcasting: 1\nI0514 21:17:25.618770 440 log.go:172] (0xc000b98000) Go away received\nI0514 21:17:25.619262 440 log.go:172] (0xc000b98000) (0xc00079c000) Stream removed, broadcasting: 1\nI0514 21:17:25.619298 440 log.go:172] (0xc000b98000) (0xc0006f01e0) Stream removed, broadcasting: 3\nI0514 21:17:25.619310 440 log.go:172] (0xc000b98000) (0xc0006f0280) 
Stream removed, broadcasting: 5\n" May 14 21:17:25.625: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:17:25.625: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:17:25.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:17:25.885: INFO: stderr: "I0514 21:17:25.774793 460 log.go:172] (0xc000a960b0) (0xc000651b80) Create stream\nI0514 21:17:25.774883 460 log.go:172] (0xc000a960b0) (0xc000651b80) Stream added, broadcasting: 1\nI0514 21:17:25.777352 460 log.go:172] (0xc000a960b0) Reply frame received for 1\nI0514 21:17:25.777414 460 log.go:172] (0xc000a960b0) (0xc000bd2000) Create stream\nI0514 21:17:25.777438 460 log.go:172] (0xc000a960b0) (0xc000bd2000) Stream added, broadcasting: 3\nI0514 21:17:25.778623 460 log.go:172] (0xc000a960b0) Reply frame received for 3\nI0514 21:17:25.778674 460 log.go:172] (0xc000a960b0) (0xc000028000) Create stream\nI0514 21:17:25.778693 460 log.go:172] (0xc000a960b0) (0xc000028000) Stream added, broadcasting: 5\nI0514 21:17:25.779493 460 log.go:172] (0xc000a960b0) Reply frame received for 5\nI0514 21:17:25.852695 460 log.go:172] (0xc000a960b0) Data frame received for 5\nI0514 21:17:25.852715 460 log.go:172] (0xc000028000) (5) Data frame handling\nI0514 21:17:25.852732 460 log.go:172] (0xc000028000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:17:25.877688 460 log.go:172] (0xc000a960b0) Data frame received for 5\nI0514 21:17:25.877726 460 log.go:172] (0xc000028000) (5) Data frame handling\nI0514 21:17:25.877750 460 log.go:172] (0xc000a960b0) Data frame received for 3\nI0514 21:17:25.877759 460 log.go:172] (0xc000bd2000) (3) Data frame handling\nI0514 21:17:25.877770 460 log.go:172] (0xc000bd2000) (3) Data frame sent\nI0514 21:17:25.877779 460 log.go:172] (0xc000a960b0) Data frame received for 3\nI0514 21:17:25.877803 460 log.go:172] (0xc000bd2000) (3) Data frame handling\nI0514 21:17:25.879365 460 log.go:172] (0xc000a960b0) Data frame received for 1\nI0514 21:17:25.879388 460 log.go:172] (0xc000651b80) (1) Data frame handling\nI0514 21:17:25.879415 460 log.go:172] (0xc000651b80) (1) Data frame sent\nI0514 21:17:25.879430 460 log.go:172] (0xc000a960b0) (0xc000651b80) Stream removed, broadcasting: 1\nI0514 21:17:25.879451 460 log.go:172] (0xc000a960b0) Go away received\nI0514 21:17:25.880043 460 log.go:172] (0xc000a960b0) (0xc000651b80) Stream removed, broadcasting: 1\nI0514 21:17:25.880070 460 log.go:172] (0xc000a960b0) (0xc000bd2000) Stream removed, broadcasting: 3\nI0514 21:17:25.880083 460 log.go:172] (0xc000a960b0) (0xc000028000) Stream removed, broadcasting: 5\n" May 14 21:17:25.885: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:17:25.885: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:17:25.885: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:17:25.888: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 14 21:17:35.918: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 21:17:35.918: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 21:17:35.918: 
INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 21:17:35.931: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:35.931: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC }] May 14 21:17:35.931: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:35.931: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:35.931: INFO: May 14 21:17:35.931: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 21:17:37.088: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:37.088: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC }] May 14 21:17:37.088: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:37.088: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:37.088: INFO: May 14 21:17:37.088: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 21:17:38.093: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:38.093: INFO: ss-0 jerma-worker2 Running 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC }] May 14 21:17:38.093: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:38.093: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:38.093: INFO: May 14 21:17:38.093: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 21:17:39.097: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:39.097: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:16:53 +0000 UTC }] May 14 21:17:39.097: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:39.097: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:39.097: INFO: May 14 21:17:39.097: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 21:17:40.102: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:40.102: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:40.102: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:40.102: INFO: May 14 21:17:40.102: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 21:17:41.106: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:41.106: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:41.106: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:41.106: INFO: May 14 21:17:41.106: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 21:17:42.110: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:42.110: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:42.110: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:42.110: INFO: May 14 21:17:42.110: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 21:17:43.115: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:43.115: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:43.115: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:43.115: INFO: May 14 21:17:43.115: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 21:17:44.121: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:44.122: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:44.122: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:44.122: INFO: May 14 21:17:44.122: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 21:17:45.126: INFO: POD NODE PHASE GRACE CONDITIONS May 14 21:17:45.126: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:45.126: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 21:17:14 +0000 UTC }] May 14 21:17:45.126: INFO: May 14 21:17:45.126: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8929 May 14 21:17:46.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:17:46.276: INFO: rc: 1 May 14 21:17:46.276: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 14 21:17:56.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:17:56.384: INFO: rc: 1 May 14 21:17:56.384: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:18:06.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:18:06.467: INFO: rc: 1 May 14 21:18:06.467: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:18:16.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:18:16.570: INFO: rc: 1 May 14 21:18:16.570: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:18:26.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:18:26.666: INFO: rc: 1 May 14 21:18:26.666: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:18:36.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:18:36.765: INFO: rc: 1 May 14 21:18:36.765: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:18:46.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:18:46.856: INFO: rc: 1 May 14 21:18:46.856: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:18:56.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:18:56.956: INFO: rc: 1 May 14 21:18:56.956: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:19:06.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:19:07.058: INFO: rc: 1 May 14 21:19:07.058: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:19:17.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:19:17.163: INFO: rc: 1 May 14 21:19:17.163: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:19:27.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:19:27.259: INFO: rc: 1 May 14 21:19:27.259: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:19:37.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:19:37.361: INFO: rc: 1 May 14 21:19:37.361: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:19:47.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:19:47.467: INFO: rc: 1 May 14 21:19:47.467: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:19:57.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:19:57.568: INFO: rc: 1 May 14 21:19:57.568: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:20:07.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:20:07.658: INFO: rc: 1 May 14 21:20:07.658: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:20:17.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:20:17.755: INFO: rc: 1 May 14 21:20:17.755: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:20:27.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:20:27.854: INFO: rc: 1 May 14 21:20:27.854: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:20:37.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:20:37.958: INFO: rc: 1 May 14 21:20:37.958: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:20:47.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:20:48.056: INFO: rc: 1 May 14 21:20:48.056: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:20:58.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:20:58.164: INFO: rc: 1 May 14 21:20:58.164: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:21:08.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:21:08.257: INFO: rc: 1 May 14 21:21:08.257: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:21:18.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:21:18.347: INFO: rc: 1 May 14 21:21:18.347: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:21:28.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:21:28.449: INFO: rc: 1 May 14 21:21:28.449: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:21:38.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:21:38.548: INFO: rc: 1 May 14 21:21:38.548: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:21:48.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:21:48.651: INFO: rc: 1 May 14 21:21:48.651: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:21:58.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:21:58.745: INFO: rc: 1 May 14 21:21:58.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:22:08.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:22:08.842: INFO: rc: 1 May 14 21:22:08.842: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:22:18.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:22:18.936: INFO: rc: 1 May 14 21:22:18.936: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:22:28.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:22:29.033: INFO: rc: 1 May 14 21:22:29.033: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:22:39.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:22:39.131: INFO: rc: 1 May 14 21:22:39.132: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 14 21:22:49.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8929 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:22:49.236: INFO: rc: 1 May 14 21:22:49.236: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: May 14 21:22:49.236: INFO: Scaling statefulset ss to 0 May 14 21:22:49.244: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] 
Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 14 21:22:49.246: INFO: Deleting all statefulset in ns statefulset-8929 May 14 21:22:49.248: INFO: Scaling statefulset ss to 0 May 14 21:22:49.255: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:22:49.258: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:22:49.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8929" for this suite. • [SLOW TEST:355.724 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":13,"skipped":236,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:22:49.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:22:49.396: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
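For reference, the "simple daemon set" the framework constructs here in Go can be approximated by hand with a manifest like the sketch below. The name, namespace, and initial image match the log entries in this test; the label key/value and the container name are illustrative assumptions, and RollingUpdate is the strategy this test exercises. The repeated "can't tolerate node jerma-control-plane" entries that follow are expected: the test skips nodes tainted node-role.kubernetes.io/master:NoSchedule rather than adding a toleration.
  kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
    namespace: daemonsets-1832
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set    # assumed label key/value
    updateStrategy:
      type: RollingUpdate
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        containers:
        - name: app                   # assumed container name
          image: docker.io/library/httpd:2.4.38-alpine
  EOF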
May 14 21:22:49.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:49.419: INFO: Number of nodes with available pods: 0 May 14 21:22:49.419: INFO: Node jerma-worker is running more than one daemon pod May 14 21:22:50.434: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:50.436: INFO: Number of nodes with available pods: 0 May 14 21:22:50.436: INFO: Node jerma-worker is running more than one daemon pod May 14 21:22:51.424: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:51.428: INFO: Number of nodes with available pods: 0 May 14 21:22:51.428: INFO: Node jerma-worker is running more than one daemon pod May 14 21:22:52.542: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:52.545: INFO: Number of nodes with available pods: 0 May 14 21:22:52.545: INFO: Node jerma-worker is running more than one daemon pod May 14 21:22:53.424: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:53.427: INFO: Number of nodes with available pods: 0 May 14 21:22:53.427: INFO: Node jerma-worker is running more than one daemon pod May 14 21:22:54.439: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:54.442: INFO: Number of nodes with available pods: 2 May 14 21:22:54.442: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 14 21:22:54.480: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:54.480: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:54.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:55.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:55.535: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:55.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:56.534: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:56.534: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
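The "Update daemon pods image" step above changes the pod template through the API; assuming the container name app from the sketch earlier, an equivalent manual update would be:
  kubectl --kubeconfig=/root/.kube/config -n daemonsets-1832 \
    set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
  kubectl --kubeconfig=/root/.kube/config -n daemonsets-1832 \
    rollout status daemonset/daemon-set
The "Wrong image for pod" entries that follow are the test polling until every pod has been recreated with the new image, which is the same condition rollout status waits for.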
May 14 21:22:56.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:57.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:57.535: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:57.538: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:58.536: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:58.536: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:58.536: INFO: Pod daemon-set-rf2zl is not available May 14 21:22:58.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:22:59.536: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:59.536: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:22:59.536: INFO: Pod daemon-set-rf2zl is not available May 14 21:22:59.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:00.536: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:00.536: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:00.536: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:00.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:01.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:01.535: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:01.535: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:01.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:02.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:02.536: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 14 21:23:02.536: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:02.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:03.536: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:03.536: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:03.536: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:03.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:04.547: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:04.547: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:04.547: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:04.551: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:05.536: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:05.536: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:05.536: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:05.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:06.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:06.535: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:06.535: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:06.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:07.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:07.535: INFO: Wrong image for pod: daemon-set-rf2zl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:07.535: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:07.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:08.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:08.535: INFO: Wrong image for pod: daemon-set-rf2zl. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:08.535: INFO: Pod daemon-set-rf2zl is not available May 14 21:23:08.538: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:09.547: INFO: Pod daemon-set-9jzss is not available May 14 21:23:09.547: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:09.551: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:10.534: INFO: Pod daemon-set-9jzss is not available May 14 21:23:10.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:10.538: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:11.536: INFO: Pod daemon-set-9jzss is not available May 14 21:23:11.536: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:11.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:12.535: INFO: Pod daemon-set-9jzss is not available May 14 21:23:12.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:12.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:13.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:13.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:14.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:14.535: INFO: Pod daemon-set-bwbl5 is not available May 14 21:23:14.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:15.534: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:15.534: INFO: Pod daemon-set-bwbl5 is not available May 14 21:23:15.553: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:16.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 14 21:23:16.535: INFO: Pod daemon-set-bwbl5 is not available May 14 21:23:16.538: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:17.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:17.536: INFO: Pod daemon-set-bwbl5 is not available May 14 21:23:17.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:18.535: INFO: Wrong image for pod: daemon-set-bwbl5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 14 21:23:18.535: INFO: Pod daemon-set-bwbl5 is not available May 14 21:23:18.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:19.543: INFO: Pod daemon-set-8l9lb is not available May 14 21:23:19.559: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 14 21:23:19.562: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:19.564: INFO: Number of nodes with available pods: 1 May 14 21:23:19.564: INFO: Node jerma-worker2 is running more than one daemon pod May 14 21:23:20.568: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:20.572: INFO: Number of nodes with available pods: 1 May 14 21:23:20.572: INFO: Node jerma-worker2 is running more than one daemon pod May 14 21:23:21.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:21.573: INFO: Number of nodes with available pods: 1 May 14 21:23:21.573: INFO: Node jerma-worker2 is running more than one daemon pod May 14 21:23:22.568: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:23:22.572: INFO: Number of nodes with available pods: 2 May 14 21:23:22.572: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1832, will wait for the garbage collector to delete the pods May 14 21:23:22.645: INFO: Deleting DaemonSet.extensions daemon-set took: 5.925998ms May 14 21:23:22.945: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248867ms May 14 21:23:29.549: INFO: Number of nodes with available pods: 0 May 14 21:23:29.549: INFO: Number of running nodes: 0, number of available pods: 0 May 14 21:23:29.552: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1832/daemonsets","resourceVersion":"16201719"},"items":null} May 14 21:23:29.556: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1832/pods","resourceVersion":"16201719"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:23:29.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1832" for this suite. • [SLOW TEST:40.294 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":14,"skipped":249,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:23:29.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:23:29.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812" in namespace "downward-api-2880" to be "success or failure" May 14 21:23:29.684: INFO: Pod "downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013463ms May 14 21:23:31.688: INFO: Pod "downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012200279s May 14 21:23:33.692: INFO: Pod "downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812": Phase="Running", Reason="", readiness=true. Elapsed: 4.015825046s May 14 21:23:35.696: INFO: Pod "downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.020100629s STEP: Saw pod success May 14 21:23:35.696: INFO: Pod "downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812" satisfied condition "success or failure" May 14 21:23:35.700: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812 container client-container: STEP: delete the pod May 14 21:23:35.735: INFO: Waiting for pod downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812 to disappear May 14 21:23:35.804: INFO: Pod downwardapi-volume-75893213-2f2c-457d-bd9c-cf581eef0812 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:23:35.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2880" for this suite. • [SLOW TEST:6.240 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":251,"failed":0} [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:23:35.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:24:08.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5394" for this suite. • [SLOW TEST:32.235 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:24:08.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 14 21:24:08.144: INFO: Waiting up to 5m0s for pod "downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3" in namespace "downward-api-8540" to be "success or failure" May 14 21:24:08.205: INFO: Pod "downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3": Phase="Pending", Reason="", readiness=false. Elapsed: 60.635524ms May 14 21:24:10.233: INFO: Pod "downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088569954s May 14 21:24:12.260: INFO: Pod "downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116088126s STEP: Saw pod success May 14 21:24:12.260: INFO: Pod "downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3" satisfied condition "success or failure" May 14 21:24:12.274: INFO: Trying to get logs from node jerma-worker pod downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3 container dapi-container: STEP: delete the pod May 14 21:24:12.293: INFO: Waiting for pod downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3 to disappear May 14 21:24:12.304: INFO: Pod downward-api-c54e33c6-2ad5-4944-add4-cb8c7efa14e3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:24:12.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8540" for this suite. 
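The pod this test just ran exposes container resource limits as environment variables through the downward API; because the container sets no limits, the values fall back to node allocatable. A minimal sketch of such a pod follows — the pod name, image, and variable names are illustrative, while the container name dapi-container matches the log above.
  kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo          # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29            # illustrative image
      command: ["sh", "-c", "env | grep LIMIT"]
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF
With no limits set on the container, CPU_LIMIT and MEMORY_LIMIT resolve to the node's allocatable CPU and memory, which is the behavior the test asserts.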
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":297,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:24:12.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 14 21:24:17.216: INFO: Successfully updated pod "annotationupdate5a247498-1a09-428e-9192-3c009e224a5a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:24:21.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6483" for this suite. • [SLOW TEST:8.941 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":304,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:24:21.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 14 21:24:28.148: INFO: 10 pods remaining May 14 21:24:28.148: INFO: 10 pods has nil DeletionTimestamp May 14 21:24:28.148: INFO: May 14 21:24:30.454: INFO: 7 pods remaining May 14 21:24:30.454: INFO: 0 pods has nil DeletionTimestamp May 14 21:24:30.454: INFO: May 14 21:24:32.185: INFO: 0 pods remaining May 14 21:24:32.186: INFO: 0 pods has nil DeletionTimestamp May 14 21:24:32.186: INFO: STEP: Gathering metrics W0514 21:24:33.505394 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 21:24:33.505: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:24:33.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8109" for this suite. • [SLOW TEST:12.875 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":19,"skipped":309,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:24:34.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2201/configmap-test-54ab8586-b33d-41b4-bbbd-e6ac7e0e93fa STEP: Creating a pod to test consume configMaps May 14 21:24:34.848: INFO: Waiting up to 5m0s for pod "pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5" in namespace "configmap-2201" to be "success or failure" May 14 21:24:34.851: INFO: Pod "pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.965529ms May 14 21:24:36.857: INFO: Pod "pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009626596s May 14 21:24:38.862: INFO: Pod "pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.01420612s May 14 21:24:40.866: INFO: Pod "pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018228999s STEP: Saw pod success May 14 21:24:40.866: INFO: Pod "pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5" satisfied condition "success or failure" May 14 21:24:40.869: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5 container env-test: STEP: delete the pod May 14 21:24:40.902: INFO: Waiting for pod pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5 to disappear May 14 21:24:40.905: INFO: Pod pod-configmaps-8be63da7-838f-42b5-9847-b4d0bf37d0c5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:24:40.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2201" for this suite. • [SLOW TEST:6.787 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":321,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:24:40.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 14 21:24:47.057: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-298 PodName:pod-sharedvolume-3a37d068-b13b-4589-ba20-3f5646d83470 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:24:47.058: INFO: >>> kubeConfig: /root/.kube/config I0514 21:24:47.115734 6 log.go:172] (0xc001df86e0) (0xc0029a0dc0) Create stream I0514 21:24:47.115767 6 log.go:172] (0xc001df86e0) (0xc0029a0dc0) Stream added, broadcasting: 1 I0514 21:24:47.117811 6 log.go:172] (0xc001df86e0) Reply frame received for 1 I0514 21:24:47.117840 6 log.go:172] (0xc001df86e0) (0xc0028483c0) Create stream I0514 21:24:47.117857 6 log.go:172] (0xc001df86e0) (0xc0028483c0) Stream added, broadcasting: 3 I0514 21:24:47.118487 6 log.go:172] (0xc001df86e0) Reply frame received for 3 I0514 21:24:47.118506 6 log.go:172] (0xc001df86e0) (0xc002738000) Create stream I0514 21:24:47.118513 6 log.go:172] (0xc001df86e0) (0xc002738000) Stream added, broadcasting: 5 I0514 21:24:47.119444 6 log.go:172] (0xc001df86e0) Reply frame received for 5 I0514 21:24:47.193415 6 log.go:172] (0xc001df86e0) Data
frame received for 3 I0514 21:24:47.193435 6 log.go:172] (0xc0028483c0) (3) Data frame handling I0514 21:24:47.193452 6 log.go:172] (0xc0028483c0) (3) Data frame sent I0514 21:24:47.193476 6 log.go:172] (0xc001df86e0) Data frame received for 5 I0514 21:24:47.193500 6 log.go:172] (0xc002738000) (5) Data frame handling I0514 21:24:47.193920 6 log.go:172] (0xc001df86e0) Data frame received for 3 I0514 21:24:47.193950 6 log.go:172] (0xc0028483c0) (3) Data frame handling I0514 21:24:47.195029 6 log.go:172] (0xc001df86e0) Data frame received for 1 I0514 21:24:47.195047 6 log.go:172] (0xc0029a0dc0) (1) Data frame handling I0514 21:24:47.195060 6 log.go:172] (0xc0029a0dc0) (1) Data frame sent I0514 21:24:47.195070 6 log.go:172] (0xc001df86e0) (0xc0029a0dc0) Stream removed, broadcasting: 1 I0514 21:24:47.195195 6 log.go:172] (0xc001df86e0) Go away received I0514 21:24:47.195294 6 log.go:172] (0xc001df86e0) (0xc0029a0dc0) Stream removed, broadcasting: 1 I0514 21:24:47.195305 6 log.go:172] (0xc001df86e0) (0xc0028483c0) Stream removed, broadcasting: 3 I0514 21:24:47.195314 6 log.go:172] (0xc001df86e0) (0xc002738000) Stream removed, broadcasting: 5 May 14 21:24:47.195: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:24:47.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-298" for this suite. • [SLOW TEST:6.289 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":21,"skipped":323,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:24:47.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 14 21:24:47.404: INFO: >>> kubeConfig: /root/.kube/config May 14 21:24:49.769: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:25:00.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6984" for this suite. 
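The test that just finished registers two CRDs that share a group and version but declare different kinds, then checks that both kinds are published in the cluster's OpenAPI document. A hand-written equivalent pair, with hypothetical group and kind names, would look like this:
  kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.demo.example.com      # hypothetical group/kind
  spec:
    group: demo.example.com
    scope: Namespaced
    names: {plural: foos, singular: foo, kind: Foo}
    versions:
    - name: v1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object}}
  ---
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: bars.demo.example.com
  spec:
    group: demo.example.com
    scope: Namespaced
    names: {plural: bars, singular: bar, kind: Bar}
    versions:
    - name: v1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object}}
  EOF
  # Once the OpenAPI document refreshes, both kinds should be resolvable:
  kubectl explain foos
  kubectl explain bars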
• [SLOW TEST:13.177 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":22,"skipped":330,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:25:00.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:25:00.619: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 14 21:25:03.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3169 create -f -' May 14 21:25:07.064: INFO: stderr: "" May 14 21:25:07.064: INFO: stdout: "e2e-test-crd-publish-openapi-9193-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 14 21:25:07.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3169 delete e2e-test-crd-publish-openapi-9193-crds test-cr' May 14 21:25:07.169: INFO: stderr: "" May 14 21:25:07.169: INFO: stdout: "e2e-test-crd-publish-openapi-9193-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 14 21:25:07.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3169 apply -f -' May 14 21:25:07.448: INFO: stderr: "" May 14 21:25:07.448: INFO: stdout: "e2e-test-crd-publish-openapi-9193-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 14 21:25:07.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3169 delete e2e-test-crd-publish-openapi-9193-crds test-cr' May 14 21:25:07.542: INFO: stderr: "" May 14 21:25:07.542: INFO: stdout: "e2e-test-crd-publish-openapi-9193-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 14 21:25:07.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9193-crds' May 14 21:25:07.774: INFO: stderr: "" May 14 21:25:07.774: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9193-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:25:09.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3169" for this suite. • [SLOW TEST:9.329 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":23,"skipped":349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:25:09.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:25:26.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9653" for this suite. • [SLOW TEST:16.481 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":24,"skipped":380,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:25:26.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 14 21:25:26.245: INFO: >>> kubeConfig: /root/.kube/config May 14 21:25:28.244: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:25:38.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-160" for this suite. • [SLOW TEST:12.570 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":25,"skipped":380,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:25:38.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-5294 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5294 STEP: Deleting pre-stop pod May 14 21:25:52.011: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:25:52.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5294" for this suite. • [SLOW TEST:13.342 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":26,"skipped":385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:25:52.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:25:56.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4731" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":27,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:25:57.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4941 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 21:25:57.127: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 21:26:25.310: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.113:8080/dial?request=hostname&protocol=http&host=10.244.1.24&port=8080&tries=1'] Namespace:pod-network-test-4941 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:26:25.310: INFO: >>> kubeConfig: /root/.kube/config I0514 21:26:25.344067 6 log.go:172] (0xc0040664d0) (0xc001dfbea0) Create stream I0514 21:26:25.344117 6 log.go:172] (0xc0040664d0) (0xc001dfbea0) Stream added, broadcasting: 1 I0514 21:26:25.346318 6 log.go:172] (0xc0040664d0) Reply frame received for 1 I0514 21:26:25.346349 6 log.go:172] (0xc0040664d0) (0xc0027afcc0) Create stream I0514 21:26:25.346357 6 log.go:172] (0xc0040664d0) (0xc0027afcc0) Stream added, broadcasting: 3 I0514 21:26:25.347182 6 log.go:172] (0xc0040664d0) Reply frame received for 3 I0514 21:26:25.347203 6 log.go:172] (0xc0040664d0) (0xc00224fc20) Create stream I0514 21:26:25.347209 6 log.go:172] (0xc0040664d0) (0xc00224fc20) Stream added, broadcasting: 5 I0514 21:26:25.347919 6 log.go:172] (0xc0040664d0) Reply frame received for 5 I0514 21:26:25.551578 6 log.go:172] (0xc0040664d0) Data frame received for 3 I0514 21:26:25.551617 6 log.go:172] (0xc0027afcc0) (3) Data frame handling I0514 21:26:25.551665 6 log.go:172] (0xc0027afcc0) (3) Data frame sent I0514 21:26:25.552309 6 log.go:172] (0xc0040664d0) Data frame received for 3 I0514 21:26:25.552404 6 log.go:172] (0xc0027afcc0) (3) Data frame handling I0514 21:26:25.552442 6 log.go:172] (0xc0040664d0) Data frame received for 5 I0514 21:26:25.552509 6 log.go:172] (0xc00224fc20) (5) Data frame handling I0514 21:26:25.554180 6 log.go:172] (0xc0040664d0) Data frame received for 1 I0514 21:26:25.554207 6 log.go:172] (0xc001dfbea0) (1) Data frame handling I0514 21:26:25.554243 6 log.go:172] (0xc001dfbea0) (1) Data frame sent I0514 21:26:25.554267 6 log.go:172] (0xc0040664d0) (0xc001dfbea0) Stream removed, broadcasting: 1 I0514 21:26:25.554301 6 log.go:172] (0xc0040664d0) Go away received I0514 21:26:25.554427 6 log.go:172] (0xc0040664d0) (0xc001dfbea0) Stream removed, broadcasting: 1 I0514 21:26:25.554446 6 log.go:172] (0xc0040664d0) (0xc0027afcc0) Stream removed, 
broadcasting: 3 I0514 21:26:25.554459 6 log.go:172] (0xc0040664d0) (0xc00224fc20) Stream removed, broadcasting: 5 May 14 21:26:25.554: INFO: Waiting for responses: map[] May 14 21:26:25.567: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.113:8080/dial?request=hostname&protocol=http&host=10.244.2.112&port=8080&tries=1'] Namespace:pod-network-test-4941 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:26:25.567: INFO: >>> kubeConfig: /root/.kube/config I0514 21:26:25.597895 6 log.go:172] (0xc0045d6420) (0xc001d14140) Create stream I0514 21:26:25.597918 6 log.go:172] (0xc0045d6420) (0xc001d14140) Stream added, broadcasting: 1 I0514 21:26:25.599642 6 log.go:172] (0xc0045d6420) Reply frame received for 1 I0514 21:26:25.599668 6 log.go:172] (0xc0045d6420) (0xc0022b6e60) Create stream I0514 21:26:25.599677 6 log.go:172] (0xc0045d6420) (0xc0022b6e60) Stream added, broadcasting: 3 I0514 21:26:25.600478 6 log.go:172] (0xc0045d6420) Reply frame received for 3 I0514 21:26:25.600506 6 log.go:172] (0xc0045d6420) (0xc001d14280) Create stream I0514 21:26:25.600516 6 log.go:172] (0xc0045d6420) (0xc001d14280) Stream added, broadcasting: 5 I0514 21:26:25.601296 6 log.go:172] (0xc0045d6420) Reply frame received for 5 I0514 21:26:25.663010 6 log.go:172] (0xc0045d6420) Data frame received for 3 I0514 21:26:25.663039 6 log.go:172] (0xc0022b6e60) (3) Data frame handling I0514 21:26:25.663051 6 log.go:172] (0xc0022b6e60) (3) Data frame sent I0514 21:26:25.663453 6 log.go:172] (0xc0045d6420) Data frame received for 3 I0514 21:26:25.663463 6 log.go:172] (0xc0022b6e60) (3) Data frame handling I0514 21:26:25.663480 6 log.go:172] (0xc0045d6420) Data frame received for 5 I0514 21:26:25.663491 6 log.go:172] (0xc001d14280) (5) Data frame handling I0514 21:26:25.665101 6 log.go:172] (0xc0045d6420) Data frame received for 1 I0514 21:26:25.665272 6 log.go:172] (0xc001d14140) (1) Data frame handling I0514 21:26:25.665283 6 log.go:172] (0xc001d14140) (1) Data frame sent I0514 21:26:25.665293 6 log.go:172] (0xc0045d6420) (0xc001d14140) Stream removed, broadcasting: 1 I0514 21:26:25.665432 6 log.go:172] (0xc0045d6420) (0xc001d14140) Stream removed, broadcasting: 1 I0514 21:26:25.665443 6 log.go:172] (0xc0045d6420) (0xc0022b6e60) Stream removed, broadcasting: 3 I0514 21:26:25.665588 6 log.go:172] (0xc0045d6420) (0xc001d14280) Stream removed, broadcasting: 5 May 14 21:26:25.665: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:26:25.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4941" for this suite. 
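------------------------------
The two ExecWithOptions calls above drive agnhost's "dial" endpoint: the suite curls the netexec container in one pod and asks it to fetch /hostname from each target pod, proving pod-to-pod HTTP connectivity. A standalone sketch of the same probe — assuming an agnhost netexec pod is reachable at probeAddr, and that its /dial handler returns the {"responses":[...]} JSON shape agnhost uses:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// dialResponse mirrors the JSON shape agnhost's /dial handler is assumed
// to return: one entry per successful try.
type dialResponse struct {
	Responses []string `json:"responses"`
}

// checkIntraPodHTTP asks the probe pod (agnhost netexec) to dial the
// target pod's /hostname endpoint over HTTP, like the e2e check above.
func checkIntraPodHTTP(probeAddr, targetIP string) ([]string, error) {
	u := fmt.Sprintf("http://%s/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
		probeAddr, url.QueryEscape(targetIP))
	resp, err := http.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		return nil, err
	}
	return dr.Responses, nil
}

func main() {
	// Addresses taken from the run above; they are only reachable in-cluster.
	hosts, err := checkIntraPodHTTP("10.244.2.113:8080", "10.244.1.24")
	fmt.Println(hosts, err)
}
------------------------------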
• [SLOW TEST:28.649 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":441,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:26:25.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:26:26.587: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 21:26:28.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 21:26:30.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088386, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:26:33.877: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:26:34.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8371" for this suite. STEP: Destroying namespace "webhook-8371-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.534 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":29,"skipped":454,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:26:34.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 14 21:26:34.263: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:26:40.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1444" for this suite. 
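------------------------------
The init-container check above only creates the pod and watches its status: with restartPolicy Never, one failing init container is enough to push the pod to phase Failed without any app container starting. A minimal sketch of such a pod against the v1.17-era client shown in this run (pod name, namespace, and images are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // no retries: first failure is final
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits 1
			}},
			Containers: []corev1.Container{{
				Name:    "app-never-starts",
				Image:   "busybox",
				Command: []string{"/bin/true"},
			}},
		},
	}
	// client-go for the v1.17 line takes no context argument on Create.
	created, err := cs.CoreV1().Pods("default").Create(pod)
	fmt.Println(created.Name, err) // pod should end up in phase Failed
}
------------------------------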
• [SLOW TEST:6.180 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":30,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:26:40.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:26:41.479: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 21:26:43.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 21:26:45.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725088401, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:26:48.520: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:26:48.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:26:49.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6132" for this suite. STEP: Destroying namespace "webhook-6132-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":31,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:26:49.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:26:49.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d" in namespace "downward-api-5690" to be "success or failure" May 14 21:26:49.843: INFO: Pod 
"downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060853ms May 14 21:26:51.862: INFO: Pod "downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029382515s May 14 21:26:53.865: INFO: Pod "downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d": Phase="Running", Reason="", readiness=true. Elapsed: 4.032842104s May 14 21:26:55.869: INFO: Pod "downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037019339s STEP: Saw pod success May 14 21:26:55.870: INFO: Pod "downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d" satisfied condition "success or failure" May 14 21:26:55.873: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d container client-container: STEP: delete the pod May 14 21:26:55.906: INFO: Waiting for pod downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d to disappear May 14 21:26:55.910: INFO: Pod downwardapi-volume-992c0fc1-37be-45fd-9fdd-c0e26d52183d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:26:55.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5690" for this suite. • [SLOW TEST:6.158 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":527,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:26:55.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 14 21:27:00.654: INFO: Successfully updated pod "annotationupdateefa1bd17-ab1b-443d-a71b-ccd37270e302" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:27:02.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1632" for this suite. 
• [SLOW TEST:6.771 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":547,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:27:02.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 21:27:02.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3036' May 14 21:27:03.008: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 21:27:03.008: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 14 21:27:05.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3036' May 14 21:27:05.601: INFO: stderr: "" May 14 21:27:05.602: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:27:05.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3036" for this suite. 
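------------------------------
The stderr captured above is the point of this test: the deployment/apps.v1 generator of kubectl run was deprecated in favor of kubectl create. The direct API equivalent is to create an apps/v1 Deployment; a hedged sketch against the v1.17-era client — the image and names come from the log, while the run= label convention only mirrors what the old generator produced:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	one := int32(1)
	labels := map[string]string{"run": "e2e-test-httpd-deployment"} // illustrative label
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "e2e-test-httpd-deployment",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("kubectl-3036").Create(d); err != nil {
		panic(err)
	}
}
------------------------------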
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":34,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:27:05.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 in namespace container-probe-1471 May 14 21:27:12.098: INFO: Started pod liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 in namespace container-probe-1471 STEP: checking the pod's current state and verifying that restartCount is present May 14 21:27:12.101: INFO: Initial restart count of pod liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 is 0 May 14 21:27:28.137: INFO: Restart count of pod container-probe-1471/liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 is now 1 (16.036568214s elapsed) May 14 21:27:48.178: INFO: Restart count of pod container-probe-1471/liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 is now 2 (36.076701185s elapsed) May 14 21:28:08.238: INFO: Restart count of pod container-probe-1471/liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 is now 3 (56.137170904s elapsed) May 14 21:28:28.430: INFO: Restart count of pod container-probe-1471/liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 is now 4 (1m16.328724339s elapsed) May 14 21:29:28.610: INFO: Restart count of pod container-probe-1471/liveness-af61b9ba-6917-4aa0-91ea-9d6141a52780 is now 5 (2m16.509161874s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:29:28.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1471" for this suite. • [SLOW TEST:142.886 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":566,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:29:28.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:29:39.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7518" for this suite. • [SLOW TEST:11.321 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":36,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:29:39.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 14 21:29:40.053: INFO: Created pod &Pod{ObjectMeta:{dns-2777 dns-2777 /api/v1/namespaces/dns-2777/pods/dns-2777 0073ad07-61f5-41b0-bd32-cd8f9357f676 16203857 0 2020-05-14 21:29:40 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p8528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p8528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p8528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 14 21:29:44.102: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2777 PodName:dns-2777 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:29:44.102: INFO: >>> kubeConfig: /root/.kube/config I0514 21:29:44.135152 6 log.go:172] (0xc0028ae4d0) (0xc002376140) Create stream I0514 21:29:44.135177 6 log.go:172] (0xc0028ae4d0) (0xc002376140) Stream added, broadcasting: 1 I0514 21:29:44.137426 6 log.go:172] (0xc0028ae4d0) Reply frame received for 1 I0514 21:29:44.137481 6 log.go:172] (0xc0028ae4d0) (0xc00224f400) Create stream I0514 21:29:44.137494 6 log.go:172] (0xc0028ae4d0) (0xc00224f400) Stream added, broadcasting: 3 I0514 21:29:44.138320 6 log.go:172] (0xc0028ae4d0) Reply frame received for 3 I0514 21:29:44.138378 6 log.go:172] (0xc0028ae4d0) (0xc0023761e0) Create stream I0514 21:29:44.138400 6 log.go:172] (0xc0028ae4d0) (0xc0023761e0) Stream added, broadcasting: 5 I0514 21:29:44.139188 6 log.go:172] (0xc0028ae4d0) Reply frame received for 5 I0514 21:29:44.246230 6 log.go:172] (0xc0028ae4d0) Data frame received for 3 I0514 21:29:44.246271 6 log.go:172] (0xc00224f400) (3) Data frame handling I0514 21:29:44.246309 6 log.go:172] (0xc00224f400) (3) Data frame sent I0514 21:29:44.246712 6 log.go:172] (0xc0028ae4d0) Data frame received for 3 I0514 21:29:44.246728 6 log.go:172] (0xc00224f400) (3) Data frame handling I0514 21:29:44.246799 6 log.go:172] (0xc0028ae4d0) Data frame received for 5 I0514 21:29:44.246811 6 log.go:172] (0xc0023761e0) (5) Data frame handling I0514 21:29:44.247917 6 log.go:172] (0xc0028ae4d0) Data frame received for 1 I0514 21:29:44.247930 6 log.go:172] (0xc002376140) (1) Data frame handling I0514 21:29:44.247942 6 log.go:172] (0xc002376140) (1) Data frame sent I0514 21:29:44.248125 6 log.go:172] (0xc0028ae4d0) (0xc002376140) Stream removed, broadcasting: 1 I0514 21:29:44.248194 6 log.go:172] (0xc0028ae4d0) (0xc002376140) Stream removed, broadcasting: 1 I0514 21:29:44.248207 6 log.go:172] (0xc0028ae4d0) (0xc00224f400) Stream removed, broadcasting: 3 I0514 21:29:44.248215 6 log.go:172] (0xc0028ae4d0) (0xc0023761e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 14 21:29:44.248: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2777 PodName:dns-2777 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:29:44.248: INFO: >>> kubeConfig: /root/.kube/config I0514 21:29:44.248383 6 log.go:172] (0xc0028ae4d0) Go away received I0514 21:29:44.276097 6 log.go:172] (0xc002794370) (0xc00224f7c0) Create stream I0514 21:29:44.276116 6 log.go:172] (0xc002794370) (0xc00224f7c0) Stream added, broadcasting: 1 I0514 21:29:44.277622 6 log.go:172] (0xc002794370) Reply frame received for 1 I0514 21:29:44.277659 6 log.go:172] (0xc002794370) (0xc0027ae000) Create stream I0514 21:29:44.277675 6 log.go:172] (0xc002794370) (0xc0027ae000) Stream added, broadcasting: 3 I0514 21:29:44.278451 6 log.go:172] (0xc002794370) Reply frame received for 3 I0514 21:29:44.278472 6 log.go:172] (0xc002794370) (0xc0027ae140) Create stream I0514 21:29:44.278483 6 log.go:172] (0xc002794370) (0xc0027ae140) Stream added, broadcasting: 5 I0514 21:29:44.279112 6 log.go:172] (0xc002794370) Reply frame received for 5 I0514 21:29:44.348601 6 log.go:172] (0xc002794370) Data frame received for 3 I0514 21:29:44.348617 6 log.go:172] (0xc0027ae000) (3) Data frame handling I0514 21:29:44.348629 6 log.go:172] (0xc0027ae000) (3) Data frame sent I0514 21:29:44.349782 6 log.go:172] (0xc002794370) Data frame received for 3 I0514 21:29:44.349838 6 log.go:172] (0xc0027ae000) (3) Data frame handling I0514 21:29:44.350578 6 log.go:172] (0xc002794370) Data frame received for 5 I0514 21:29:44.350596 6 log.go:172] (0xc0027ae140) (5) Data frame handling I0514 21:29:44.351527 6 log.go:172] (0xc002794370) Data frame received for 1 I0514 21:29:44.351548 6 log.go:172] (0xc00224f7c0) (1) Data frame handling I0514 21:29:44.351562 6 log.go:172] (0xc00224f7c0) (1) Data frame sent I0514 21:29:44.351585 6 log.go:172] (0xc002794370) (0xc00224f7c0) Stream removed, broadcasting: 1 I0514 21:29:44.351607 6 log.go:172] (0xc002794370) Go away received I0514 21:29:44.351746 6 log.go:172] (0xc002794370) (0xc00224f7c0) Stream removed, broadcasting: 1 I0514 21:29:44.351775 6 log.go:172] (0xc002794370) (0xc0027ae000) Stream removed, broadcasting: 3 I0514 21:29:44.351791 6 log.go:172] (0xc002794370) (0xc0027ae140) Stream removed, broadcasting: 5 May 14 21:29:44.351: INFO: Deleting pod dns-2777... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:29:44.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2777" for this suite. 
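------------------------------
The &Pod{...} dump a few lines up shows the fields this test cares about: DNSPolicy None plus a custom nameserver and search list, which land verbatim in the container's /etc/resolv.conf (that is what the two agnhost exec checks read back). Reconstructed as a spec, keeping only the fields the dump confirms:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dnsConfigPod rebuilds the interesting part of the pod dumped above:
// dnsPolicy None disables cluster DNS inheritance, so resolv.conf holds
// exactly the values given in dnsConfig.
func dnsConfigPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-2777"},
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
		},
	}
}

func main() { _ = dnsConfigPod() }
------------------------------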
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":37,"skipped":611,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:29:44.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f7a4682e-362d-443e-8978-70c10822c97c STEP: Creating a pod to test consume secrets May 14 21:29:44.911: INFO: Waiting up to 5m0s for pod "pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8" in namespace "secrets-1600" to be "success or failure" May 14 21:29:44.977: INFO: Pod "pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 65.52257ms May 14 21:29:46.981: INFO: Pod "pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069933495s May 14 21:29:48.985: INFO: Pod "pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074227038s May 14 21:29:50.989: INFO: Pod "pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078052634s STEP: Saw pod success May 14 21:29:50.989: INFO: Pod "pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8" satisfied condition "success or failure" May 14 21:29:50.992: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8 container secret-volume-test: STEP: delete the pod May 14 21:29:51.039: INFO: Waiting for pod pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8 to disappear May 14 21:29:51.044: INFO: Pod pod-secrets-1bfc36b9-6949-4b00-9202-64166260e2e8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:29:51.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1600" for this suite. 
• [SLOW TEST:6.599 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":612,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:29:51.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 21:29:55.159: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:29:55.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8434" for this suite. 
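------------------------------
What makes the DONE comparison above work is TerminationMessagePolicy FallbackToLogsOnError: when the container fails without writing to /dev/termination-log, the kubelet copies the tail of its log into the terminated state's message. A minimal container spec that reproduces this (image and names illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationMessagePod logs DONE and exits nonzero; with the fallback
// policy, status.containerStatuses[0].state.terminated.message becomes the
// log tail ("DONE"), which is what the check above compares against.
func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "term-demo",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"}, // fail after logging
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() { _ = terminationMessagePod() }
------------------------------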
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":612,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:29:55.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:29:55.279: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:29:55.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8228" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":40,"skipped":620,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:29:55.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:29:56.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0" in namespace "projected-7158" to be "success or failure" May 14 21:29:56.081: INFO: Pod "downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.534862ms May 14 21:29:58.084: INFO: Pod "downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019573067s May 14 21:30:00.088: INFO: Pod "downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02372924s STEP: Saw pod success May 14 21:30:00.088: INFO: Pod "downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0" satisfied condition "success or failure" May 14 21:30:00.092: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0 container client-container: STEP: delete the pod May 14 21:30:00.136: INFO: Waiting for pod downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0 to disappear May 14 21:30:00.157: INFO: Pod downwardapi-volume-0c78c4cc-2e1d-4e03-b1b9-840f662d08d0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:00.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7158" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":633,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:00.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 14 21:30:00.613: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 14 21:30:02.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:30:05.769: INFO: Waiting for amount of 
service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:30:05.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:07.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-301" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.946 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":42,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:07.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:30:07.152: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:11.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7565" for this suite. 
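------------------------------
The conformance test above reads the pod "log" subresource over a websocket through the e2e framework. As a plain-client substitute — explicitly not the websocket path — client-go can stream the same subresource over its ordinary REST channel. The pod name below is hypothetical, and Stream() takes no context argument on the v1.17-era client:

package main

import (
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same /pods/{name}/log subresource the websocket test hits, streamed
	// over plain HTTP instead.
	req := cs.CoreV1().Pods("pods-7565").GetLogs("pod-logs-demo", &corev1.PodLogOptions{})
	rc, err := req.Stream()
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // print the container log to stdout
}
------------------------------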
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":675,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:11.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:11.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4680" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":44,"skipped":707,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:11.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4249 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4249 I0514 21:30:11.423121 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4249, replica count: 2 I0514 21:30:14.473466 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:30:17.473631 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 21:30:17.473: INFO: Creating new exec pod May 14 21:30:22.487: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4249 execpodz7pb9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 14 21:30:22.748: INFO: stderr: "I0514 21:30:22.634542 1248 log.go:172] (0xc0000f53f0) (0xc00063b9a0) Create stream\nI0514 21:30:22.634609 1248 log.go:172] (0xc0000f53f0) (0xc00063b9a0) Stream added, broadcasting: 1\nI0514 21:30:22.637824 1248 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0514 21:30:22.637860 1248 log.go:172] (0xc0000f53f0) (0xc0005c6000) Create stream\nI0514 21:30:22.637879 1248 log.go:172] (0xc0000f53f0) (0xc0005c6000) Stream added, broadcasting: 3\nI0514 21:30:22.638814 1248 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0514 21:30:22.638838 1248 log.go:172] (0xc0000f53f0) (0xc0005c6140) Create stream\nI0514 21:30:22.638845 1248 log.go:172] (0xc0000f53f0) (0xc0005c6140) Stream added, broadcasting: 5\nI0514 21:30:22.640087 1248 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0514 21:30:22.736479 1248 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0514 21:30:22.736514 1248 log.go:172] (0xc0005c6140) (5) Data frame handling\nI0514 21:30:22.736540 1248 log.go:172] (0xc0005c6140) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0514 21:30:22.742096 1248 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0514 21:30:22.742136 1248 log.go:172] (0xc0005c6140) (5) Data frame handling\nI0514 21:30:22.742172 1248 log.go:172] (0xc0005c6140) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0514 21:30:22.742283 1248 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0514 21:30:22.742308 1248 log.go:172] (0xc0005c6140) (5) Data frame handling\nI0514 21:30:22.743046 1248 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0514 21:30:22.743065 1248 log.go:172] (0xc0005c6000) (3) Data frame handling\nI0514 21:30:22.744069 1248 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0514 21:30:22.744083 1248 log.go:172] (0xc00063b9a0) (1) Data frame handling\nI0514 21:30:22.744102 1248 log.go:172] (0xc00063b9a0) (1) Data frame sent\nI0514 21:30:22.744119 1248 log.go:172] (0xc0000f53f0) (0xc00063b9a0) Stream removed, broadcasting: 1\nI0514 21:30:22.744175 1248 log.go:172] (0xc0000f53f0) Go away received\nI0514 21:30:22.744391 1248 log.go:172] (0xc0000f53f0) (0xc00063b9a0) Stream removed, broadcasting: 1\nI0514 21:30:22.744406 1248 log.go:172] (0xc0000f53f0) (0xc0005c6000) Stream removed, broadcasting: 3\nI0514 21:30:22.744411 1248 log.go:172] (0xc0000f53f0) (0xc0005c6140) Stream removed, broadcasting: 5\n" May 14 21:30:22.748: INFO: stdout: "" May 14 21:30:22.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4249 execpodz7pb9 -- /bin/sh -x -c nc -zv -t -w 2 10.106.64.16 80' May 14 21:30:22.975: INFO: stderr: "I0514 21:30:22.888673 1271 log.go:172] (0xc000a29550) (0xc000725ae0) Create stream\nI0514 21:30:22.888747 1271 log.go:172] (0xc000a29550) (0xc000725ae0) Stream added, broadcasting: 1\nI0514 21:30:22.896196 1271 log.go:172] (0xc000a29550) Reply frame received for 1\nI0514 21:30:22.896251 1271 log.go:172] (0xc000a29550) (0xc0009f4500) Create stream\nI0514 21:30:22.896266 1271 log.go:172] (0xc000a29550) (0xc0009f4500) Stream added, broadcasting: 3\nI0514 21:30:22.897839 1271 log.go:172] (0xc000a29550) Reply frame received for 3\nI0514 21:30:22.897941 1271 log.go:172] (0xc000a29550) (0xc0009f45a0) Create stream\nI0514 21:30:22.897994 1271 log.go:172] (0xc000a29550) (0xc0009f45a0) 
Stream added, broadcasting: 5\nI0514 21:30:22.902710 1271 log.go:172] (0xc000a29550) Reply frame received for 5\nI0514 21:30:22.969425 1271 log.go:172] (0xc000a29550) Data frame received for 5\nI0514 21:30:22.969470 1271 log.go:172] (0xc0009f45a0) (5) Data frame handling\nI0514 21:30:22.969492 1271 log.go:172] (0xc0009f45a0) (5) Data frame sent\nI0514 21:30:22.969509 1271 log.go:172] (0xc000a29550) Data frame received for 5\nI0514 21:30:22.969519 1271 log.go:172] (0xc0009f45a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.64.16 80\nConnection to 10.106.64.16 80 port [tcp/http] succeeded!\nI0514 21:30:22.969538 1271 log.go:172] (0xc000a29550) Data frame received for 3\nI0514 21:30:22.969559 1271 log.go:172] (0xc0009f4500) (3) Data frame handling\nI0514 21:30:22.971083 1271 log.go:172] (0xc000a29550) Data frame received for 1\nI0514 21:30:22.971172 1271 log.go:172] (0xc000725ae0) (1) Data frame handling\nI0514 21:30:22.971217 1271 log.go:172] (0xc000725ae0) (1) Data frame sent\nI0514 21:30:22.971254 1271 log.go:172] (0xc000a29550) (0xc000725ae0) Stream removed, broadcasting: 1\nI0514 21:30:22.971285 1271 log.go:172] (0xc000a29550) Go away received\nI0514 21:30:22.971727 1271 log.go:172] (0xc000a29550) (0xc000725ae0) Stream removed, broadcasting: 1\nI0514 21:30:22.971749 1271 log.go:172] (0xc000a29550) (0xc0009f4500) Stream removed, broadcasting: 3\nI0514 21:30:22.971761 1271 log.go:172] (0xc000a29550) (0xc0009f45a0) Stream removed, broadcasting: 5\n" May 14 21:30:22.975: INFO: stdout: "" May 14 21:30:22.975: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:22.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4249" for this suite. 
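------------------------------
[Sketch] The test above flips the Service's .spec.type from ExternalName to ClusterIP, then proves reachability with nc against both the service name and the allocated cluster IP (10.106.64.16). A minimal client-go sketch of that type change follows; the function and field values are illustrative, and it assumes the context-aware signatures of client-go v0.18+ (the v1.17 suite logged here used context-free ones). It also relies on the service selecting the externalname-service replication controller's pods, which the test sets up.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// convertToClusterIP mutates an ExternalName Service in place. ExternalName
// is only valid for that type, so it must be cleared, and a ClusterIP
// service needs at least one port to probe.
func convertToClusterIP(cs kubernetes.Interface, ns, name string) error {
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80}}
	_, err = cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
	return err
}
------------------------------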
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.731 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":45,"skipped":709,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:23.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:30:23.135: INFO: Create a RollingUpdate DaemonSet May 14 21:30:23.139: INFO: Check that daemon pods launch on every node of the cluster May 14 21:30:23.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:23.147: INFO: Number of nodes with available pods: 0 May 14 21:30:23.147: INFO: Node jerma-worker is running more than one daemon pod May 14 21:30:24.151: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:24.154: INFO: Number of nodes with available pods: 0 May 14 21:30:24.154: INFO: Node jerma-worker is running more than one daemon pod May 14 21:30:25.151: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:25.154: INFO: Number of nodes with available pods: 0 May 14 21:30:25.154: INFO: Node jerma-worker is running more than one daemon pod May 14 21:30:26.161: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:26.163: INFO: Number of nodes with available pods: 0 May 14 21:30:26.163: INFO: Node jerma-worker is running more than one daemon pod May 14 21:30:27.164: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:27.167: INFO: Number of nodes with available pods: 0 May 14 21:30:27.167: INFO: Node jerma-worker is running more than one daemon pod May 14 21:30:28.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:28.179: INFO: Number of nodes with available pods: 2 May 14 21:30:28.179: INFO: Number of running nodes: 2, number of available pods: 2 May 14 21:30:28.179: INFO: Update the DaemonSet to trigger a rollout May 14 21:30:28.218: INFO: Updating DaemonSet daemon-set May 14 21:30:40.271: INFO: Roll back the DaemonSet before rollout is complete May 14 21:30:40.278: INFO: Updating DaemonSet daemon-set May 14 21:30:40.278: INFO: Make sure DaemonSet rollback is complete May 14 21:30:40.286: INFO: Wrong image for pod: daemon-set-6vs42. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 21:30:40.286: INFO: Pod daemon-set-6vs42 is not available May 14 21:30:40.309: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:41.314: INFO: Wrong image for pod: daemon-set-6vs42. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 14 21:30:41.314: INFO: Pod daemon-set-6vs42 is not available May 14 21:30:41.319: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:30:42.332: INFO: Pod daemon-set-x6s4r is not available May 14 21:30:42.347: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8544, will wait for the garbage collector to delete the pods May 14 21:30:42.416: INFO: Deleting DaemonSet.extensions daemon-set took: 8.822175ms May 14 21:30:42.716: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.283587ms May 14 21:30:49.519: INFO: Number of nodes with available pods: 0 May 14 21:30:49.519: INFO: Number of running nodes: 0, number of available pods: 0 May 14 21:30:49.541: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8544/daemonsets","resourceVersion":"16204414"},"items":null} May 14 21:30:49.542: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8544/pods","resourceVersion":"16204414"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:49.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8544" for this suite. 
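------------------------------
[Sketch] The rollback above is just a second update that restores the previous pod template while the broken foo:non-existent image is still rolling out; with a RollingUpdate strategy the controller only replaces pods whose spec actually changed, which is why healthy pods are not restarted unnecessarily. An illustrative client-go sketch (v0.18+ signatures):

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollBackImage restores the DaemonSet's container image, mirroring the
// "Roll back the DaemonSet before rollout is complete" step in the log.
func rollBackImage(cs kubernetes.Interface, ns, name, goodImage string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// e.g. goodImage = "docker.io/library/httpd:2.4.38-alpine" as in the log
	ds.Spec.Template.Spec.Containers[0].Image = goodImage
	_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}
------------------------------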
• [SLOW TEST:26.535 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":46,"skipped":712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:49.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 14 21:30:49.609: INFO: Pod name pod-release: Found 0 pods out of 1 May 14 21:30:54.637: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:30:54.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9724" for this suite. 
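------------------------------
[Sketch] The "release" in this test is label-driven: once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference on the pod and creates a replacement to restore the replica count. An illustrative relabeling sketch (client-go v0.18+ signatures; the "name" key and the new value are assumptions, echoing the pod-release naming in the log):

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// releaseFromRC rewrites the selected label so the RC no longer matches the
// pod; the pod keeps running but is orphaned from the controller.
func releaseFromRC(cs kubernetes.Interface, ns, podName string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Labels["name"] = "pod-released" // hypothetical non-matching value
	_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
	return err
}
------------------------------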
• [SLOW TEST:5.233 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":47,"skipped":740,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:30:54.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 14 21:30:55.775: INFO: Pod name wrapped-volume-race-c5934d63-19f9-4f46-a103-19c2b7f86c64: Found 0 pods out of 5 May 14 21:31:01.027: INFO: Pod name wrapped-volume-race-c5934d63-19f9-4f46-a103-19c2b7f86c64: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c5934d63-19f9-4f46-a103-19c2b7f86c64 in namespace emptydir-wrapper-7464, will wait for the garbage collector to delete the pods May 14 21:31:15.142: INFO: Deleting ReplicationController wrapped-volume-race-c5934d63-19f9-4f46-a103-19c2b7f86c64 took: 4.608346ms May 14 21:31:15.442: INFO: Terminating ReplicationController wrapped-volume-race-c5934d63-19f9-4f46-a103-19c2b7f86c64 pods took: 300.219398ms STEP: Creating RC which spawns configmap-volume pods May 14 21:31:29.858: INFO: Pod name wrapped-volume-race-2f9e6afc-3e44-4bb8-8110-b7cba823b38f: Found 0 pods out of 5 May 14 21:31:34.865: INFO: Pod name wrapped-volume-race-2f9e6afc-3e44-4bb8-8110-b7cba823b38f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2f9e6afc-3e44-4bb8-8110-b7cba823b38f in namespace emptydir-wrapper-7464, will wait for the garbage collector to delete the pods May 14 21:31:49.079: INFO: Deleting ReplicationController wrapped-volume-race-2f9e6afc-3e44-4bb8-8110-b7cba823b38f took: 12.089889ms May 14 21:31:49.379: INFO: Terminating ReplicationController wrapped-volume-race-2f9e6afc-3e44-4bb8-8110-b7cba823b38f pods took: 300.274691ms STEP: Creating RC which spawns configmap-volume pods May 14 21:32:00.462: INFO: Pod name wrapped-volume-race-64c55b36-ed06-456e-bd63-d47c47dca53f: Found 0 pods out of 5 May 14 21:32:05.468: INFO: Pod name wrapped-volume-race-64c55b36-ed06-456e-bd63-d47c47dca53f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-64c55b36-ed06-456e-bd63-d47c47dca53f in namespace emptydir-wrapper-7464, will wait for the garbage collector to delete the pods May 14 21:32:17.997: INFO: Deleting ReplicationController wrapped-volume-race-64c55b36-ed06-456e-bd63-d47c47dca53f took: 7.546462ms May 14 
21:32:18.298: INFO: Terminating ReplicationController wrapped-volume-race-64c55b36-ed06-456e-bd63-d47c47dca53f pods took: 300.209768ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:32:31.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7464" for this suite. • [SLOW TEST:96.357 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":48,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:32:31.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2295bc03-a7b7-4b52-b550-20c8b57df505 STEP: Creating a pod to test consume configMaps May 14 21:32:31.239: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b" in namespace "configmap-2337" to be "success or failure" May 14 21:32:31.244: INFO: Pod "pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.462347ms May 14 21:32:33.249: INFO: Pod "pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010798967s May 14 21:32:35.254: INFO: Pod "pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015336803s STEP: Saw pod success May 14 21:32:35.254: INFO: Pod "pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b" satisfied condition "success or failure" May 14 21:32:35.257: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b container configmap-volume-test: STEP: delete the pod May 14 21:32:35.304: INFO: Waiting for pod pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b to disappear May 14 21:32:35.447: INFO: Pod pod-configmaps-c1dbea0b-5ed3-4f4d-aded-cb316e1e736b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:32:35.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2337" for this suite. 
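------------------------------
[Sketch] The defaultMode test mounts a ConfigMap as a volume and has the container stat the projected file to confirm the permission bits. The relevant spec fragment, as an illustrative builder (the concrete mode value is an assumption; the field semantics are not):

package sketches

import corev1 "k8s.io/api/core/v1"

// configMapVolume projects every key of the named ConfigMap as a file whose
// permission bits are DefaultMode (here 0400, owner read-only).
func configMapVolume(cmName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				DefaultMode:          &mode,
			},
		},
	}
}
------------------------------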
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:32:35.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 14 21:32:35.762: INFO: Waiting up to 5m0s for pod "var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492" in namespace "var-expansion-8619" to be "success or failure" May 14 21:32:35.789: INFO: Pod "var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492": Phase="Pending", Reason="", readiness=false. Elapsed: 27.705907ms May 14 21:32:37.981: INFO: Pod "var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219341192s May 14 21:32:40.040: INFO: Pod "var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.278660796s STEP: Saw pod success May 14 21:32:40.040: INFO: Pod "var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492" satisfied condition "success or failure" May 14 21:32:40.120: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492 container dapi-container: STEP: delete the pod May 14 21:32:40.324: INFO: Waiting for pod var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492 to disappear May 14 21:32:40.328: INFO: Pod var-expansion-aad94e80-e9c8-4213-b340-106c7e04c492 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:32:40.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8619" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:32:40.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-a98bed38-446a-4d0e-93c7-b315e18d6392 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a98bed38-446a-4d0e-93c7-b315e18d6392 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:32:46.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3842" for this suite. • [SLOW TEST:6.471 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":859,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:32:46.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 14 21:32:51.987: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:32:52.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3022" for this suite. 
• [SLOW TEST:5.677 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":52,"skipped":862,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:32:52.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 14 21:32:52.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205777 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 21:32:52.921: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205777 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 14 21:33:02.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205828 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 14 21:33:02.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205828 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 14 21:33:12.938: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205860 0 2020-05-14 21:32:52 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 21:33:12.938: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205860 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 14 21:33:22.947: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205894 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 21:33:22.947: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-a d74fa5dd-7aeb-4e38-a133-e881f8bc2550 16205894 0 2020-05-14 21:32:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 14 21:33:32.955: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-b 5b3f19bc-6567-4ca8-9585-832a1a0bbffd 16205924 0 2020-05-14 21:33:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 21:33:32.955: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-b 5b3f19bc-6567-4ca8-9585-832a1a0bbffd 16205924 0 2020-05-14 21:33:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 14 21:33:42.962: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-b 5b3f19bc-6567-4ca8-9585-832a1a0bbffd 16205954 0 2020-05-14 21:33:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 21:33:42.962: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2092 /api/v1/namespaces/watch-2092/configmaps/e2e-watch-test-configmap-b 5b3f19bc-6567-4ca8-9585-832a1a0bbffd 16205954 0 2020-05-14 21:33:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:33:52.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2092" for this suite. 
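------------------------------
[Sketch] Each duplicated "Got :" pair in the log is the same event seen by two watchers, since the A-or-B watch matches everything the label-A and label-B watches match. A selector-scoped watch like the ones the test opens, as an illustrative sketch (client-go v0.18+ signatures):

package sketches

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabelA streams only the configmap events whose labels satisfy the
// selector, mirroring "creating a watch on configmaps with label A".
func watchLabelA(cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED, MODIFIED, DELETED, as in the log
	}
	return nil
}
------------------------------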
• [SLOW TEST:60.457 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":53,"skipped":872,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:33:52.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 14 21:33:53.054: INFO: Waiting up to 5m0s for pod "downward-api-2351f401-3e0b-407a-97b4-90591678c5f9" in namespace "downward-api-3313" to be "success or failure" May 14 21:33:53.059: INFO: Pod "downward-api-2351f401-3e0b-407a-97b4-90591678c5f9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.444103ms May 14 21:33:55.064: INFO: Pod "downward-api-2351f401-3e0b-407a-97b4-90591678c5f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009675654s May 14 21:33:57.068: INFO: Pod "downward-api-2351f401-3e0b-407a-97b4-90591678c5f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.01367391s May 14 21:33:59.072: INFO: Pod "downward-api-2351f401-3e0b-407a-97b4-90591678c5f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018198393s STEP: Saw pod success May 14 21:33:59.072: INFO: Pod "downward-api-2351f401-3e0b-407a-97b4-90591678c5f9" satisfied condition "success or failure" May 14 21:33:59.076: INFO: Trying to get logs from node jerma-worker2 pod downward-api-2351f401-3e0b-407a-97b4-90591678c5f9 container dapi-container: STEP: delete the pod May 14 21:33:59.104: INFO: Waiting for pod downward-api-2351f401-3e0b-407a-97b4-90591678c5f9 to disappear May 14 21:33:59.108: INFO: Pod downward-api-2351f401-3e0b-407a-97b4-90591678c5f9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:33:59.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3313" for this suite. 
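------------------------------
[Sketch] The pod-UID test wires metadata.uid into the container's environment through the downward API and reads it back from the container log. The env entry it depends on, as an illustrative fragment (the variable name is an assumption):

package sketches

import corev1 "k8s.io/api/core/v1"

// podUIDEnv exposes the pod's own UID to the container; the kubelet fills
// in the value at pod start from the pod's metadata.
func podUIDEnv() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
		},
	}
}
------------------------------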
• [SLOW TEST:6.144 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":880,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:33:59.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4676.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4676.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 21:34:05.253: INFO: DNS probes using dns-4676/dns-test-9ac9703e-4cc5-46bd-9d9e-a20d19eb0745 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:05.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4676" for this suite. 
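------------------------------
[Sketch] The wheezy and jessie probe pods loop over dig for both UDP and TCP and drop an OK marker per name that resolves; the test then collects /results. A rough in-cluster equivalent of one lookup in Go (illustrative; it leans on the pod's resolver rather than dig's explicit transport flags):

package sketches

import (
	"fmt"
	"net"
)

// probeClusterDNS resolves the API server's cluster-internal service name,
// the principal record the conformance probes check.
func probeClusterDNS() error {
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		return err
	}
	fmt.Println("OK", addrs) // the shell probes write OK to /results/... instead
	return nil
}
------------------------------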
• [SLOW TEST:6.257 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":55,"skipped":900,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:05.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:13.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2245" for this suite. • [SLOW TEST:7.745 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":56,"skipped":932,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:13.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 14 21:34:13.171: INFO: Waiting up to 5m0s for pod "pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782" in namespace "emptydir-8093" to be "success or failure" May 14 21:34:13.194: INFO: Pod "pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782": Phase="Pending", Reason="", readiness=false. Elapsed: 22.356946ms May 14 21:34:15.198: INFO: Pod "pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026712357s May 14 21:34:17.227: INFO: Pod "pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05567496s STEP: Saw pod success May 14 21:34:17.227: INFO: Pod "pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782" satisfied condition "success or failure" May 14 21:34:17.230: INFO: Trying to get logs from node jerma-worker pod pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782 container test-container: STEP: delete the pod May 14 21:34:17.260: INFO: Waiting for pod pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782 to disappear May 14 21:34:17.270: INFO: Pod pod-bf1c26a9-6abd-45c2-a7a2-66ac8070a782 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:17.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8093" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":932,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:17.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:21.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-372" for this suite. 
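------------------------------
[Sketch] For a container whose command always fails, the assertion behind this kubelet test lives in the pod status: a terminated state with a non-zero exit code and a reason string. An illustrative status read (client-go v0.18+ signatures; note that while the container is backing off, the details may sit under LastTerminationState rather than State):

package sketches

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminatedReason returns the Reason of the first container's terminated
// state, e.g. "Error" for a command that exits non-zero.
func terminatedReason(cs kubernetes.Interface, ns, podName string) (string, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		return "", fmt.Errorf("no container statuses reported yet")
	}
	st := pod.Status.ContainerStatuses[0].State.Terminated
	if st == nil {
		return "", fmt.Errorf("container has not terminated")
	}
	return st.Reason, nil
}
------------------------------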
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":940,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:21.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:34:21.665: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bcf19083-7820-43a9-8eb8-f4fde94704aa", Controller:(*bool)(0xc002cfb66a), BlockOwnerDeletion:(*bool)(0xc002cfb66b)}} May 14 21:34:21.672: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"646319ce-acc5-4339-9d75-5ea79e7e521f", Controller:(*bool)(0xc0030bcd82), BlockOwnerDeletion:(*bool)(0xc0030bcd83)}} May 14 21:34:21.731: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b2487551-ccb7-41d2-9979-11075aaa7a20", Controller:(*bool)(0xc000d683aa), BlockOwnerDeletion:(*bool)(0xc000d683ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:26.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9765" for this suite. 
• [SLOW TEST:5.438 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":59,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:26.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e9f73e2f-7cfa-43ae-a244-66ef522ceda5 STEP: Creating a pod to test consume secrets May 14 21:34:26.950: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73" in namespace "projected-7929" to be "success or failure" May 14 21:34:26.975: INFO: Pod "pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73": Phase="Pending", Reason="", readiness=false. Elapsed: 25.213382ms May 14 21:34:28.979: INFO: Pod "pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029703323s May 14 21:34:30.983: INFO: Pod "pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033188135s STEP: Saw pod success May 14 21:34:30.983: INFO: Pod "pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73" satisfied condition "success or failure" May 14 21:34:30.985: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73 container projected-secret-volume-test: STEP: delete the pod May 14 21:34:31.022: INFO: Waiting for pod pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73 to disappear May 14 21:34:31.239: INFO: Pod pod-projected-secrets-45c7e068-8541-4dde-bbcf-9591ef252f73 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:31.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7929" for this suite. 
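------------------------------
[Sketch] The "with mappings" part of the projected-secret test is the Items list: instead of projecting every secret key under its own name, a key is remapped to a chosen file path inside the mount, which the container then reads back. An illustrative volume builder (key and path names are assumptions):

package sketches

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume remaps one key of the named Secret to a new file
// path inside the projected volume; keys without an Items entry would be
// projected under their own names.
func projectedSecretVolume(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key in the Secret
							Path: "new-path-data-1", // file name inside the mount
						}},
					},
				}},
			},
		},
	}
}
------------------------------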
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":985,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:31.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 14 21:34:35.997: INFO: Successfully updated pod "labelsupdate646ddd79-8cf9-4819-b36d-a0e4f8fb6b5e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8763" for this suite. • [SLOW TEST:8.830 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":989,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:40.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 21:34:40.163: INFO: Waiting up to 5m0s for pod "pod-13c9feb5-097a-4369-a2cf-cebd47441cdf" in namespace "emptydir-6377" to be "success or failure" May 14 21:34:40.181: INFO: Pod "pod-13c9feb5-097a-4369-a2cf-cebd47441cdf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.180606ms May 14 21:34:42.184: INFO: Pod "pod-13c9feb5-097a-4369-a2cf-cebd47441cdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021429117s May 14 21:34:44.188: INFO: Pod "pod-13c9feb5-097a-4369-a2cf-cebd47441cdf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025030895s STEP: Saw pod success May 14 21:34:44.188: INFO: Pod "pod-13c9feb5-097a-4369-a2cf-cebd47441cdf" satisfied condition "success or failure" May 14 21:34:44.190: INFO: Trying to get logs from node jerma-worker pod pod-13c9feb5-097a-4369-a2cf-cebd47441cdf container test-container: STEP: delete the pod May 14 21:34:44.269: INFO: Waiting for pod pod-13c9feb5-097a-4369-a2cf-cebd47441cdf to disappear May 14 21:34:44.300: INFO: Pod pod-13c9feb5-097a-4369-a2cf-cebd47441cdf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:44.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6377" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:44.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:34:45.003: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 21:34:47.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088885, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088885, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088885, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088884, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 21:34:49.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088885, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088885, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088885, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088884, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:34:52.102: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:34:52.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4250" for this suite. STEP: Destroying namespace "webhook-4250-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.937 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":63,"skipped":1021,"failed":0} [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:34:52.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 14 21:34:52.323: INFO: namespace kubectl-1977 May 14 21:34:52.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1977' May 14 21:34:52.664: INFO: stderr: "" May 14 21:34:52.664: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 14 21:34:53.668: INFO: Selector matched 1 pods for map[app:agnhost] May 14 21:34:53.668: INFO: Found 0 / 1 May 14 21:34:54.671: INFO: Selector matched 1 pods for map[app:agnhost] May 14 21:34:54.671: INFO: Found 0 / 1 May 14 21:34:55.668: INFO: Selector matched 1 pods for map[app:agnhost] May 14 21:34:55.668: INFO: Found 0 / 1 May 14 21:34:56.755: INFO: Selector matched 1 pods for map[app:agnhost] May 14 21:34:56.755: INFO: Found 1 / 1 May 14 21:34:56.755: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 21:34:56.770: INFO: Selector matched 1 pods for map[app:agnhost] May 14 21:34:56.770: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 21:34:56.770: INFO: wait on agnhost-master startup in kubectl-1977 May 14 21:34:56.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-k25sr agnhost-master --namespace=kubectl-1977' May 14 21:34:56.881: INFO: stderr: "" May 14 21:34:56.881: INFO: stdout: "Paused\n" STEP: exposing RC May 14 21:34:56.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1977' May 14 21:34:57.020: INFO: stderr: "" May 14 21:34:57.020: INFO: stdout: "service/rm2 exposed\n" May 14 21:34:57.027: INFO: Service rm2 in namespace kubectl-1977 found. STEP: exposing service May 14 21:34:59.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1977' May 14 21:34:59.167: INFO: stderr: "" May 14 21:34:59.167: INFO: stdout: "service/rm3 exposed\n" May 14 21:34:59.177: INFO: Service rm3 in namespace kubectl-1977 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:35:01.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1977" for this suite. • [SLOW TEST:8.917 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":64,"skipped":1021,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:35:01.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:35:12.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-390" for this suite. • [SLOW TEST:11.185 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":65,"skipped":1033,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:35:12.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-d0557eaf-9a0a-4035-be58-cfdc2fddf031 STEP: Creating a pod to test consume secrets May 14 21:35:12.646: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0" in namespace "projected-2277" to be "success or failure" May 14 21:35:12.677: INFO: Pod "pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.496133ms May 14 21:35:14.681: INFO: Pod "pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035657147s May 14 21:35:16.686: INFO: Pod "pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039796858s STEP: Saw pod success May 14 21:35:16.686: INFO: Pod "pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0" satisfied condition "success or failure" May 14 21:35:16.689: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0 container projected-secret-volume-test: STEP: delete the pod May 14 21:35:16.817: INFO: Waiting for pod pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0 to disappear May 14 21:35:16.906: INFO: Pod pod-projected-secrets-d58562cb-da22-4a64-ae32-aa975bd651e0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:35:16.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2277" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:35:16.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-aaf74ba2-7ee5-49f5-8dde-2e6bfd34a699 STEP: Creating a pod to test consume configMaps May 14 21:35:16.999: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac" in namespace "projected-2849" to be "success or failure" May 14 21:35:17.002: INFO: Pod "pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.883358ms May 14 21:35:19.019: INFO: Pod "pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020060282s May 14 21:35:21.022: INFO: Pod "pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023775811s STEP: Saw pod success May 14 21:35:21.022: INFO: Pod "pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac" satisfied condition "success or failure" May 14 21:35:21.025: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac container projected-configmap-volume-test: STEP: delete the pod May 14 21:35:21.057: INFO: Waiting for pod pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac to disappear May 14 21:35:21.070: INFO: Pod pod-projected-configmaps-04ad87bd-9acf-49ef-92b2-5cf2db569aac no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:35:21.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2849" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1076,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:35:21.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 14 21:35:22.171: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 14 21:35:24.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088922, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088922, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088922, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088922, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:35:27.482: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] 
should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:35:27.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:35:28.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5294" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.833 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":68,"skipped":1090,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:35:28.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 14 21:35:28.992: INFO: PodSpec: initContainers in spec.initContainers May 14 21:36:16.117: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-915a19c2-d64d-461e-b6fa-3e5ca264f7cb", GenerateName:"", Namespace:"init-container-6504", SelfLink:"/api/v1/namespaces/init-container-6504/pods/pod-init-915a19c2-d64d-461e-b6fa-3e5ca264f7cb", UID:"3dbc9cde-02e9-454d-bfb4-8dabb2e6b2c2", ResourceVersion:"16206927", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725088929, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"992603754"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-77wdr", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0029b8e40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-77wdr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-77wdr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-77wdr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005bd1f58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002971560), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005bd1fe0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003044000)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003044008), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00304400c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088929, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088929, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088929, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725088929, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.57", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.57"}}, 
StartTime:(*v1.Time)(0xc0032a7d60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0032a7da0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021ebf80)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://cade7319869a94afa17fa34bf0de85bab7e7a78510db68a3a4461cce176a8a82", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032a7dc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032a7d80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00304409f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:36:16.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6504" for this suite. 
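For readers, the pod dump above decodes to a simple spec: two init containers, init1 running /bin/false (so it fails and is restarted indefinitely) and init2 running /bin/true (which never gets its turn), in front of a pause app container run1 that must stay Waiting, since a RestartPolicy of Always keeps the kubelet retrying init1 rather than giving up on the pod. Below is a minimal sketch of an equivalent pod using the same k8s.io/api types as the dump; names, images, and the 100m CPU limit/request are carried over from the log, while the helper function itself is illustrative.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod mirrors the pod in the dump above: init1 always exits
// non-zero, so init2 and the app container run1 never start, and with
// RestartPolicy Always the kubelet keeps restarting init1 indefinitely.
func failingInitPod(namespace string) *corev1.Pod {
	cpu := resource.MustParse("100m")
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pod-init-",
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
}
```

The status in the dump shows exactly that outcome: init1's RestartCount has reached 3 while init2 and run1 both sit in a Waiting state, which is what the "should not start app containers" assertion checks.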
• [SLOW TEST:47.363 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":69,"skipped":1098,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:36:16.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jdmhn in namespace proxy-2446 I0514 21:36:16.563411 6 runners.go:189] Created replication controller with name: proxy-service-jdmhn, namespace: proxy-2446, replica count: 1 I0514 21:36:17.613814 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:36:18.614048 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:36:19.614315 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:36:20.614516 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 21:36:21.614693 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 21:36:22.614907 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 21:36:23.615109 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 21:36:24.615292 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 21:36:25.615488 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 21:36:26.615639 6 runners.go:189] proxy-service-jdmhn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 21:36:26.618: INFO: setup took 10.110212908s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 14 21:36:26.623: 
INFO: (0) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.70342ms) May 14 21:36:26.634: INFO: (0) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 15.242273ms) May 14 21:36:26.634: INFO: (0) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 15.600828ms) May 14 21:36:26.634: INFO: (0) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 15.36839ms) May 14 21:36:26.635: INFO: (0) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 15.98676ms) May 14 21:36:26.635: INFO: (0) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 16.214119ms) May 14 21:36:26.636: INFO: (0) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 17.543363ms) May 14 21:36:26.637: INFO: (0) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 18.110159ms) May 14 21:36:26.637: INFO: (0) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 18.494379ms) May 14 21:36:26.637: INFO: (0) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 18.388667ms) May 14 21:36:26.638: INFO: (0) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 19.039757ms) May 14 21:36:26.643: INFO: (0) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 24.317306ms) May 14 21:36:26.643: INFO: (0) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 24.469573ms) May 14 21:36:26.644: INFO: (0) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 25.825853ms) May 14 21:36:26.644: INFO: (0) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 25.731249ms) May 14 21:36:26.646: INFO: (0) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: ... (200; 3.554737ms) May 14 21:36:26.650: INFO: (1) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... 
(200; 3.805461ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.020965ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 4.083681ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.195703ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.399907ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.507512ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.671945ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 4.688525ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.685202ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.739067ms) May 14 21:36:26.651: INFO: (1) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 4.789555ms) May 14 21:36:26.654: INFO: (2) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 2.63319ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 3.133054ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.561149ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 3.513511ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.000458ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.477034ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 3.05785ms) May 14 21:36:26.655: INFO: (2) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.434537ms) May 14 21:36:26.656: INFO: (2) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test (200; 3.302076ms) May 14 21:36:26.661: INFO: (3) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.474512ms) May 14 21:36:26.661: INFO: (3) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.520885ms) May 14 21:36:26.661: INFO: (3) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 3.687654ms) May 14 21:36:26.662: INFO: (3) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: ... 
(200; 4.26693ms) May 14 21:36:26.662: INFO: (3) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.295669ms) May 14 21:36:26.662: INFO: (3) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.669497ms) May 14 21:36:26.663: INFO: (3) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 5.378954ms) May 14 21:36:26.663: INFO: (3) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 5.439804ms) May 14 21:36:26.663: INFO: (3) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 5.639039ms) May 14 21:36:26.663: INFO: (3) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.695088ms) May 14 21:36:26.663: INFO: (3) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 5.680274ms) May 14 21:36:26.666: INFO: (4) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 2.450994ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.199368ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 3.282448ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 3.30432ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.42108ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.454286ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.395218ms) May 14 21:36:26.667: INFO: (4) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... (200; 3.69953ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.040858ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.148127ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 4.114118ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.103721ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.139013ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.112167ms) May 14 21:36:26.668: INFO: (4) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 4.205212ms) May 14 21:36:26.669: INFO: (5) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 1.506465ms) May 14 21:36:26.670: INFO: (5) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 2.452759ms) May 14 21:36:26.670: INFO: (5) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... (200; 4.208379ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... 
(200; 4.20665ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 4.222587ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.327259ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.432383ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.575884ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.61793ms) May 14 21:36:26.672: INFO: (5) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 4.628809ms) May 14 21:36:26.673: INFO: (5) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.722031ms) May 14 21:36:26.676: INFO: (6) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 3.609621ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 5.192485ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... (200; 5.158206ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 5.15216ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 5.211377ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.478038ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 5.492757ms) May 14 21:36:26.678: INFO: (6) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 5.5194ms) May 14 21:36:26.681: INFO: (7) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 2.837847ms) May 14 21:36:26.681: INFO: (7) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.010774ms) May 14 21:36:26.681: INFO: (7) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.172275ms) May 14 21:36:26.681: INFO: (7) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.23659ms) May 14 21:36:26.681: INFO: (7) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 3.196144ms) May 14 21:36:26.682: INFO: (7) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.818525ms) May 14 21:36:26.682: INFO: (7) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.998999ms) May 14 21:36:26.682: INFO: (7) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.206471ms) May 14 21:36:26.682: INFO: (7) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.222429ms) May 14 21:36:26.683: INFO: (7) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.820563ms) May 14 21:36:26.683: INFO: (7) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... 
(200; 4.815096ms) May 14 21:36:26.683: INFO: (7) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.817945ms) May 14 21:36:26.683: INFO: (7) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.118036ms) May 14 21:36:26.683: INFO: (7) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 5.086421ms) May 14 21:36:26.684: INFO: (7) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test (200; 3.091235ms) May 14 21:36:26.687: INFO: (8) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.395462ms) May 14 21:36:26.687: INFO: (8) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.600358ms) May 14 21:36:26.688: INFO: (8) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.583671ms) May 14 21:36:26.688: INFO: (8) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... (200; 5.124927ms) May 14 21:36:26.689: INFO: (8) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 5.088226ms) May 14 21:36:26.689: INFO: (8) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 5.400885ms) May 14 21:36:26.689: INFO: (8) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 5.404192ms) May 14 21:36:26.689: INFO: (8) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 5.378288ms) May 14 21:36:26.689: INFO: (8) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 5.478191ms) May 14 21:36:26.690: INFO: (8) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 5.914854ms) May 14 21:36:26.693: INFO: (9) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 2.726635ms) May 14 21:36:26.693: INFO: (9) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 2.860207ms) May 14 21:36:26.693: INFO: (9) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 2.94594ms) May 14 21:36:26.693: INFO: (9) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 2.945486ms) May 14 21:36:26.693: INFO: (9) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... 
(200; 10.722461ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 10.765927ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 10.820674ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 10.827086ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 10.947705ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 10.937689ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 10.977121ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 10.996407ms) May 14 21:36:26.701: INFO: (9) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 10.943813ms) May 14 21:36:26.704: INFO: (10) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 3.211766ms) May 14 21:36:26.704: INFO: (10) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.401664ms) May 14 21:36:26.704: INFO: (10) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.444402ms) May 14 21:36:26.704: INFO: (10) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.452855ms) May 14 21:36:26.705: INFO: (10) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 3.671822ms) May 14 21:36:26.705: INFO: (10) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 4.000227ms) May 14 21:36:26.705: INFO: (10) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... (200; 4.144188ms) May 14 21:36:26.706: INFO: (10) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 5.349789ms) May 14 21:36:26.706: INFO: (10) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 5.339961ms) May 14 21:36:26.706: INFO: (10) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.459981ms) May 14 21:36:26.706: INFO: (10) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 5.464728ms) May 14 21:36:26.706: INFO: (10) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 5.45696ms) May 14 21:36:26.707: INFO: (10) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 5.71354ms) May 14 21:36:26.708: INFO: (11) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 1.837018ms) May 14 21:36:26.710: INFO: (11) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.222384ms) May 14 21:36:26.710: INFO: (11) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 3.350953ms) May 14 21:36:26.710: INFO: (11) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.29288ms) May 14 21:36:26.710: INFO: (11) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... 
(200; 3.576439ms) May 14 21:36:26.710: INFO: (11) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 3.730475ms) May 14 21:36:26.711: INFO: (11) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 4.229852ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.846607ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.949306ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 5.545836ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 5.568368ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.52016ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 5.545158ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 5.611333ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 5.607576ms) May 14 21:36:26.712: INFO: (11) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: ... (200; 4.411109ms) May 14 21:36:26.717: INFO: (12) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 4.399067ms) May 14 21:36:26.717: INFO: (12) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 4.459279ms) May 14 21:36:26.717: INFO: (12) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.408306ms) May 14 21:36:26.717: INFO: (12) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... 
(200; 4.414429ms) May 14 21:36:26.717: INFO: (12) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.448176ms) May 14 21:36:26.720: INFO: (13) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 2.880207ms) May 14 21:36:26.720: INFO: (13) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 2.885522ms) May 14 21:36:26.720: INFO: (13) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test (200; 2.919246ms) May 14 21:36:26.720: INFO: (13) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 2.951035ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.605157ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 4.699553ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.702728ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.742079ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.702641ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 4.777967ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 4.764872ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.765467ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 4.783664ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.757199ms) May 14 21:36:26.722: INFO: (13) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.831245ms) May 14 21:36:26.725: INFO: (14) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test<... (200; 3.96627ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... 
(200; 3.960519ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.034808ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.031146ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.077487ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 4.047298ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.982775ms) May 14 21:36:26.726: INFO: (14) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.028883ms) May 14 21:36:26.727: INFO: (14) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.727801ms) May 14 21:36:26.727: INFO: (14) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 4.935239ms) May 14 21:36:26.727: INFO: (14) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 5.045677ms) May 14 21:36:26.727: INFO: (14) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 5.046288ms) May 14 21:36:26.730: INFO: (15) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test (200; 4.525978ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.553372ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 4.577759ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 4.615874ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 4.621909ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.616862ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 4.619879ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.696525ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... 
(200; 4.628583ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.67742ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 4.633586ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 4.696064ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 4.681663ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.784597ms) May 14 21:36:26.732: INFO: (15) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.923497ms) May 14 21:36:26.735: INFO: (16) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 2.570422ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.515809ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 3.44347ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 3.494899ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.550543ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.488618ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 3.572026ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 3.612292ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 3.684647ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 3.686151ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 3.733781ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 3.633233ms) May 14 21:36:26.736: INFO: (16) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: ... (200; 3.315842ms) May 14 21:36:26.737: INFO: (16) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 4.7622ms) May 14 21:36:26.737: INFO: (16) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.875427ms) May 14 21:36:26.740: INFO: (17) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 2.849898ms) May 14 21:36:26.740: INFO: (17) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... 
(200; 2.643247ms) May 14 21:36:26.740: INFO: (17) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 2.748075ms) May 14 21:36:26.740: INFO: (17) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.169723ms) May 14 21:36:26.740: INFO: (17) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.289612ms) May 14 21:36:26.740: INFO: (17) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test (200; 3.601928ms) May 14 21:36:26.743: INFO: (17) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 5.61315ms) May 14 21:36:26.743: INFO: (17) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname1/proxy/: foo (200; 5.967856ms) May 14 21:36:26.743: INFO: (17) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 5.84925ms) May 14 21:36:26.743: INFO: (17) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 6.104142ms) May 14 21:36:26.743: INFO: (17) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.966696ms) May 14 21:36:26.744: INFO: (17) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 6.755413ms) May 14 21:36:26.744: INFO: (17) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 6.838696ms) May 14 21:36:26.748: INFO: (18) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname2/proxy/: bar (200; 4.161488ms) May 14 21:36:26.748: INFO: (18) /api/v1/namespaces/proxy-2446/services/http:proxy-service-jdmhn:portname2/proxy/: bar (200; 4.215119ms) May 14 21:36:26.749: INFO: (18) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.98902ms) May 14 21:36:26.749: INFO: (18) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.991969ms) May 14 21:36:26.749: INFO: (18) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 5.265275ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg/proxy/: test (200; 5.598411ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:462/proxy/: tls qux (200; 5.628209ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 5.598662ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... 
(200; 5.551194ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 5.624745ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname2/proxy/: tls qux (200; 5.605534ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 5.637574ms) May 14 21:36:26.750: INFO: (18) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: test (200; 2.991662ms) May 14 21:36:26.753: INFO: (19) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:460/proxy/: tls baz (200; 2.995358ms) May 14 21:36:26.753: INFO: (19) /api/v1/namespaces/proxy-2446/services/https:proxy-service-jdmhn:tlsportname1/proxy/: tls baz (200; 3.236913ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 3.479645ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/services/proxy-service-jdmhn:portname1/proxy/: foo (200; 3.755161ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 3.756408ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/proxy-service-jdmhn-nlngg:1080/proxy/: test<... (200; 3.829423ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:160/proxy/: foo (200; 4.035402ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:162/proxy/: bar (200; 4.099443ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/http:proxy-service-jdmhn-nlngg:1080/proxy/: ... (200; 4.10106ms) May 14 21:36:26.754: INFO: (19) /api/v1/namespaces/proxy-2446/pods/https:proxy-service-jdmhn-nlngg:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:36:39.383: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6239 I0514 21:36:39.396078 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6239, replica count: 1 I0514 21:36:40.446382 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:36:41.446618 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:36:42.446887 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:36:43.447237 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 21:36:43.577: INFO: Created: latency-svc-gxqcp May 14 21:36:43.595: INFO: Got endpoints: latency-svc-gxqcp [48.088712ms] May 14 21:36:43.661: INFO: Created: latency-svc-kn5mc May 14 21:36:43.664: INFO: Got endpoints: latency-svc-kn5mc [69.182132ms] May 14 21:36:43.694: INFO: Created: latency-svc-whf5w May 14 21:36:43.707: INFO: Got endpoints: latency-svc-whf5w [111.770165ms] May 
14 21:36:43.735: INFO: Created: latency-svc-zshw5 May 14 21:36:43.749: INFO: Got endpoints: latency-svc-zshw5 [154.116777ms] May 14 21:36:43.799: INFO: Created: latency-svc-2kfp7 May 14 21:36:43.835: INFO: Got endpoints: latency-svc-2kfp7 [239.508859ms] May 14 21:36:43.835: INFO: Created: latency-svc-86zjq May 14 21:36:43.852: INFO: Got endpoints: latency-svc-86zjq [256.389105ms] May 14 21:36:43.877: INFO: Created: latency-svc-sw5fc May 14 21:36:43.894: INFO: Got endpoints: latency-svc-sw5fc [298.520496ms] May 14 21:36:43.943: INFO: Created: latency-svc-v7ddn May 14 21:36:43.945: INFO: Got endpoints: latency-svc-v7ddn [349.698088ms] May 14 21:36:43.971: INFO: Created: latency-svc-ktvp8 May 14 21:36:43.992: INFO: Got endpoints: latency-svc-ktvp8 [396.638634ms] May 14 21:36:44.011: INFO: Created: latency-svc-flkj6 May 14 21:36:44.104: INFO: Got endpoints: latency-svc-flkj6 [508.278351ms] May 14 21:36:44.113: INFO: Created: latency-svc-vr6wm May 14 21:36:44.115: INFO: Got endpoints: latency-svc-vr6wm [519.114428ms] May 14 21:36:44.179: INFO: Created: latency-svc-fgtcv May 14 21:36:44.198: INFO: Got endpoints: latency-svc-fgtcv [601.932476ms] May 14 21:36:44.242: INFO: Created: latency-svc-zzqv7 May 14 21:36:44.245: INFO: Got endpoints: latency-svc-zzqv7 [649.81932ms] May 14 21:36:44.302: INFO: Created: latency-svc-5fn8w May 14 21:36:44.318: INFO: Got endpoints: latency-svc-5fn8w [723.03416ms] May 14 21:36:44.403: INFO: Created: latency-svc-jgz4m May 14 21:36:44.410: INFO: Got endpoints: latency-svc-jgz4m [813.879799ms] May 14 21:36:44.482: INFO: Created: latency-svc-2xhk7 May 14 21:36:44.492: INFO: Got endpoints: latency-svc-2xhk7 [896.069208ms] May 14 21:36:44.536: INFO: Created: latency-svc-pcffg May 14 21:36:44.593: INFO: Got endpoints: latency-svc-pcffg [929.042308ms] May 14 21:36:44.623: INFO: Created: latency-svc-t9p9n May 14 21:36:44.687: INFO: Got endpoints: latency-svc-t9p9n [979.52632ms] May 14 21:36:44.692: INFO: Created: latency-svc-c5j2h May 14 21:36:44.719: INFO: Got endpoints: latency-svc-c5j2h [968.950532ms] May 14 21:36:44.759: INFO: Created: latency-svc-qt62f May 14 21:36:44.828: INFO: Got endpoints: latency-svc-qt62f [992.831044ms] May 14 21:36:44.830: INFO: Created: latency-svc-ptlxx May 14 21:36:44.839: INFO: Got endpoints: latency-svc-ptlxx [987.185121ms] May 14 21:36:44.864: INFO: Created: latency-svc-28r86 May 14 21:36:44.876: INFO: Got endpoints: latency-svc-28r86 [981.502637ms] May 14 21:36:44.896: INFO: Created: latency-svc-z58dz May 14 21:36:44.911: INFO: Got endpoints: latency-svc-z58dz [966.138549ms] May 14 21:36:44.959: INFO: Created: latency-svc-47z85 May 14 21:36:44.990: INFO: Got endpoints: latency-svc-47z85 [997.595881ms] May 14 21:36:44.990: INFO: Created: latency-svc-7gsck May 14 21:36:45.002: INFO: Got endpoints: latency-svc-7gsck [898.010242ms] May 14 21:36:45.104: INFO: Created: latency-svc-zp7jc May 14 21:36:45.116: INFO: Got endpoints: latency-svc-zp7jc [1.001170519s] May 14 21:36:45.155: INFO: Created: latency-svc-vmm9c May 14 21:36:45.171: INFO: Got endpoints: latency-svc-vmm9c [973.250315ms] May 14 21:36:45.199: INFO: Created: latency-svc-2j2rj May 14 21:36:45.229: INFO: Got endpoints: latency-svc-2j2rj [983.591238ms] May 14 21:36:45.253: INFO: Created: latency-svc-r2nx5 May 14 21:36:45.267: INFO: Got endpoints: latency-svc-r2nx5 [948.246844ms] May 14 21:36:45.289: INFO: Created: latency-svc-9t8qr May 14 21:36:45.303: INFO: Got endpoints: latency-svc-9t8qr [893.671553ms] May 14 21:36:45.379: INFO: Created: latency-svc-gvl55 May 14 21:36:45.697: 
INFO: Got endpoints: latency-svc-gvl55 [1.205257174s] May 14 21:36:45.727: INFO: Created: latency-svc-j8tjh May 14 21:36:45.864: INFO: Got endpoints: latency-svc-j8tjh [1.270274872s] May 14 21:36:45.891: INFO: Created: latency-svc-dv4t7 May 14 21:36:45.909: INFO: Got endpoints: latency-svc-dv4t7 [1.222601693s] May 14 21:36:45.961: INFO: Created: latency-svc-pvzj9 May 14 21:36:46.008: INFO: Got endpoints: latency-svc-pvzj9 [1.289243972s] May 14 21:36:46.066: INFO: Created: latency-svc-x9ltm May 14 21:36:46.083: INFO: Got endpoints: latency-svc-x9ltm [1.255434916s] May 14 21:36:46.159: INFO: Created: latency-svc-8xvw4 May 14 21:36:46.195: INFO: Got endpoints: latency-svc-8xvw4 [1.35568728s] May 14 21:36:46.195: INFO: Created: latency-svc-bpb6m May 14 21:36:46.210: INFO: Got endpoints: latency-svc-bpb6m [1.334233745s] May 14 21:36:46.237: INFO: Created: latency-svc-bf6xn May 14 21:36:46.252: INFO: Got endpoints: latency-svc-bf6xn [1.340776162s] May 14 21:36:46.313: INFO: Created: latency-svc-ntzng May 14 21:36:46.330: INFO: Got endpoints: latency-svc-ntzng [1.339905321s] May 14 21:36:46.361: INFO: Created: latency-svc-zq8k7 May 14 21:36:46.373: INFO: Got endpoints: latency-svc-zq8k7 [1.370642532s] May 14 21:36:46.392: INFO: Created: latency-svc-lks8s May 14 21:36:46.410: INFO: Got endpoints: latency-svc-lks8s [1.293473622s] May 14 21:36:46.469: INFO: Created: latency-svc-k86c8 May 14 21:36:46.476: INFO: Got endpoints: latency-svc-k86c8 [1.304740987s] May 14 21:36:46.515: INFO: Created: latency-svc-99nl8 May 14 21:36:46.530: INFO: Got endpoints: latency-svc-99nl8 [1.301027109s] May 14 21:36:46.607: INFO: Created: latency-svc-qn9fc May 14 21:36:46.611: INFO: Got endpoints: latency-svc-qn9fc [1.34367438s] May 14 21:36:46.633: INFO: Created: latency-svc-lqpj2 May 14 21:36:46.644: INFO: Got endpoints: latency-svc-lqpj2 [1.340764935s] May 14 21:36:46.681: INFO: Created: latency-svc-gc2xq May 14 21:36:46.756: INFO: Got endpoints: latency-svc-gc2xq [1.059133827s] May 14 21:36:46.800: INFO: Created: latency-svc-wqscc May 14 21:36:46.819: INFO: Got endpoints: latency-svc-wqscc [954.727358ms] May 14 21:36:46.843: INFO: Created: latency-svc-8hr8c May 14 21:36:46.855: INFO: Got endpoints: latency-svc-8hr8c [945.146385ms] May 14 21:36:46.901: INFO: Created: latency-svc-mshw6 May 14 21:36:46.917: INFO: Got endpoints: latency-svc-mshw6 [908.947487ms] May 14 21:36:46.960: INFO: Created: latency-svc-mz92l May 14 21:36:46.975: INFO: Got endpoints: latency-svc-mz92l [891.719191ms] May 14 21:36:46.999: INFO: Created: latency-svc-t7trd May 14 21:36:47.038: INFO: Got endpoints: latency-svc-t7trd [842.66844ms] May 14 21:36:47.053: INFO: Created: latency-svc-86d2p May 14 21:36:47.066: INFO: Got endpoints: latency-svc-86d2p [856.23408ms] May 14 21:36:47.089: INFO: Created: latency-svc-5hslw May 14 21:36:47.102: INFO: Got endpoints: latency-svc-5hslw [849.588785ms] May 14 21:36:47.134: INFO: Created: latency-svc-hhw6s May 14 21:36:47.229: INFO: Got endpoints: latency-svc-hhw6s [899.403216ms] May 14 21:36:47.231: INFO: Created: latency-svc-k5gsr May 14 21:36:47.250: INFO: Got endpoints: latency-svc-k5gsr [877.148494ms] May 14 21:36:47.281: INFO: Created: latency-svc-7rtl7 May 14 21:36:47.294: INFO: Got endpoints: latency-svc-7rtl7 [884.76327ms] May 14 21:36:47.373: INFO: Created: latency-svc-4lghs May 14 21:36:47.376: INFO: Got endpoints: latency-svc-4lghs [900.331032ms] May 14 21:36:47.404: INFO: Created: latency-svc-z9r7l May 14 21:36:47.423: INFO: Got endpoints: latency-svc-z9r7l [892.290193ms] May 14 21:36:47.473: 
INFO: Created: latency-svc-pgcxd May 14 21:36:47.499: INFO: Got endpoints: latency-svc-pgcxd [887.919849ms] May 14 21:36:47.547: INFO: Created: latency-svc-wl48w May 14 21:36:47.560: INFO: Got endpoints: latency-svc-wl48w [915.733811ms] May 14 21:36:47.646: INFO: Created: latency-svc-z8sj6 May 14 21:36:47.670: INFO: Created: latency-svc-zstnn May 14 21:36:47.670: INFO: Got endpoints: latency-svc-z8sj6 [914.070268ms] May 14 21:36:47.687: INFO: Got endpoints: latency-svc-zstnn [868.196705ms] May 14 21:36:47.714: INFO: Created: latency-svc-v8rzj May 14 21:36:47.723: INFO: Got endpoints: latency-svc-v8rzj [868.394194ms] May 14 21:36:47.780: INFO: Created: latency-svc-mt92w May 14 21:36:47.783: INFO: Got endpoints: latency-svc-mt92w [866.361478ms] May 14 21:36:47.811: INFO: Created: latency-svc-75lxw May 14 21:36:47.841: INFO: Got endpoints: latency-svc-75lxw [866.286619ms] May 14 21:36:47.874: INFO: Created: latency-svc-bh25c May 14 21:36:47.912: INFO: Got endpoints: latency-svc-bh25c [874.043778ms] May 14 21:36:47.947: INFO: Created: latency-svc-9hf45 May 14 21:36:47.958: INFO: Got endpoints: latency-svc-9hf45 [892.257988ms] May 14 21:36:48.056: INFO: Created: latency-svc-fvhgt May 14 21:36:48.090: INFO: Got endpoints: latency-svc-fvhgt [988.416908ms] May 14 21:36:48.132: INFO: Created: latency-svc-hdtjf May 14 21:36:48.144: INFO: Got endpoints: latency-svc-hdtjf [914.554626ms] May 14 21:36:48.200: INFO: Created: latency-svc-f8ph7 May 14 21:36:48.204: INFO: Got endpoints: latency-svc-f8ph7 [953.926963ms] May 14 21:36:48.232: INFO: Created: latency-svc-zlg5v May 14 21:36:48.246: INFO: Got endpoints: latency-svc-zlg5v [952.024626ms] May 14 21:36:48.291: INFO: Created: latency-svc-bprrr May 14 21:36:48.340: INFO: Got endpoints: latency-svc-bprrr [963.784127ms] May 14 21:36:48.358: INFO: Created: latency-svc-nks8j May 14 21:36:48.396: INFO: Got endpoints: latency-svc-nks8j [973.649416ms] May 14 21:36:48.397: INFO: Created: latency-svc-kz2zv May 14 21:36:48.469: INFO: Got endpoints: latency-svc-kz2zv [970.660992ms] May 14 21:36:48.489: INFO: Created: latency-svc-46vnw May 14 21:36:48.528: INFO: Got endpoints: latency-svc-46vnw [968.106435ms] May 14 21:36:48.655: INFO: Created: latency-svc-bqqz8 May 14 21:36:48.663: INFO: Got endpoints: latency-svc-bqqz8 [992.474438ms] May 14 21:36:48.720: INFO: Created: latency-svc-7ppjb May 14 21:36:48.740: INFO: Got endpoints: latency-svc-7ppjb [1.052832788s] May 14 21:36:48.807: INFO: Created: latency-svc-f8kqm May 14 21:36:48.824: INFO: Got endpoints: latency-svc-f8kqm [1.100936689s] May 14 21:36:48.849: INFO: Created: latency-svc-gbz5t May 14 21:36:48.866: INFO: Got endpoints: latency-svc-gbz5t [1.082961791s] May 14 21:36:48.924: INFO: Created: latency-svc-jz8tt May 14 21:36:48.932: INFO: Got endpoints: latency-svc-jz8tt [1.091043634s] May 14 21:36:48.954: INFO: Created: latency-svc-jlxmm May 14 21:36:48.969: INFO: Got endpoints: latency-svc-jlxmm [1.057142697s] May 14 21:36:48.990: INFO: Created: latency-svc-ns822 May 14 21:36:48.999: INFO: Got endpoints: latency-svc-ns822 [1.0403856s] May 14 21:36:49.023: INFO: Created: latency-svc-t9sbn May 14 21:36:49.091: INFO: Got endpoints: latency-svc-t9sbn [1.000686622s] May 14 21:36:49.095: INFO: Created: latency-svc-grdvh May 14 21:36:49.107: INFO: Got endpoints: latency-svc-grdvh [963.600699ms] May 14 21:36:49.134: INFO: Created: latency-svc-8wkxb May 14 21:36:49.150: INFO: Got endpoints: latency-svc-8wkxb [945.687893ms] May 14 21:36:49.170: INFO: Created: latency-svc-k2tvh May 14 21:36:49.223: INFO: Got 
endpoints: latency-svc-k2tvh [976.434163ms] May 14 21:36:49.236: INFO: Created: latency-svc-m7mps May 14 21:36:49.253: INFO: Got endpoints: latency-svc-m7mps [913.240104ms] May 14 21:36:49.277: INFO: Created: latency-svc-lf8pr May 14 21:36:49.311: INFO: Got endpoints: latency-svc-lf8pr [914.48872ms] May 14 21:36:49.367: INFO: Created: latency-svc-rvrst May 14 21:36:49.371: INFO: Got endpoints: latency-svc-rvrst [901.903889ms] May 14 21:36:49.392: INFO: Created: latency-svc-kb2fg May 14 21:36:49.403: INFO: Got endpoints: latency-svc-kb2fg [875.293284ms] May 14 21:36:49.440: INFO: Created: latency-svc-mwgbd May 14 21:36:49.467: INFO: Got endpoints: latency-svc-mwgbd [803.636609ms] May 14 21:36:49.526: INFO: Created: latency-svc-nbw97 May 14 21:36:49.530: INFO: Got endpoints: latency-svc-nbw97 [789.974416ms] May 14 21:36:49.551: INFO: Created: latency-svc-nfqqm May 14 21:36:49.566: INFO: Got endpoints: latency-svc-nfqqm [742.047211ms] May 14 21:36:49.592: INFO: Created: latency-svc-hnb58 May 14 21:36:49.615: INFO: Got endpoints: latency-svc-hnb58 [748.260483ms] May 14 21:36:49.667: INFO: Created: latency-svc-qx7t5 May 14 21:36:49.670: INFO: Got endpoints: latency-svc-qx7t5 [737.273948ms] May 14 21:36:49.695: INFO: Created: latency-svc-586cs May 14 21:36:49.711: INFO: Got endpoints: latency-svc-586cs [742.01366ms] May 14 21:36:49.729: INFO: Created: latency-svc-5zzls May 14 21:36:49.752: INFO: Got endpoints: latency-svc-5zzls [753.201406ms] May 14 21:36:49.834: INFO: Created: latency-svc-dg84s May 14 21:36:49.838: INFO: Got endpoints: latency-svc-dg84s [746.65181ms] May 14 21:36:49.863: INFO: Created: latency-svc-fq26v May 14 21:36:49.880: INFO: Got endpoints: latency-svc-fq26v [772.240576ms] May 14 21:36:49.899: INFO: Created: latency-svc-ns4cj May 14 21:36:49.929: INFO: Got endpoints: latency-svc-ns4cj [779.362798ms] May 14 21:36:49.987: INFO: Created: latency-svc-jlt2w May 14 21:36:49.989: INFO: Got endpoints: latency-svc-jlt2w [766.557131ms] May 14 21:36:50.022: INFO: Created: latency-svc-26sbr May 14 21:36:50.036: INFO: Got endpoints: latency-svc-26sbr [782.953946ms] May 14 21:36:50.061: INFO: Created: latency-svc-7lgcz May 14 21:36:50.078: INFO: Got endpoints: latency-svc-7lgcz [767.504975ms] May 14 21:36:50.133: INFO: Created: latency-svc-v4lw5 May 14 21:36:50.151: INFO: Got endpoints: latency-svc-v4lw5 [780.150967ms] May 14 21:36:50.178: INFO: Created: latency-svc-pc5jb May 14 21:36:50.199: INFO: Got endpoints: latency-svc-pc5jb [795.995554ms] May 14 21:36:50.221: INFO: Created: latency-svc-f6sq9 May 14 21:36:50.265: INFO: Got endpoints: latency-svc-f6sq9 [798.524966ms] May 14 21:36:50.295: INFO: Created: latency-svc-4k7pc May 14 21:36:50.314: INFO: Got endpoints: latency-svc-4k7pc [784.246346ms] May 14 21:36:50.346: INFO: Created: latency-svc-njpl8 May 14 21:36:50.417: INFO: Got endpoints: latency-svc-njpl8 [850.892194ms] May 14 21:36:50.463: INFO: Created: latency-svc-d494n May 14 21:36:50.503: INFO: Got endpoints: latency-svc-d494n [887.69852ms] May 14 21:36:50.624: INFO: Created: latency-svc-s9dbq May 14 21:36:50.639: INFO: Got endpoints: latency-svc-s9dbq [969.675265ms] May 14 21:36:50.692: INFO: Created: latency-svc-8vss4 May 14 21:36:50.792: INFO: Got endpoints: latency-svc-8vss4 [1.081126178s] May 14 21:36:50.797: INFO: Created: latency-svc-mtqqj May 14 21:36:50.826: INFO: Got endpoints: latency-svc-mtqqj [1.073977751s] May 14 21:36:50.884: INFO: Created: latency-svc-pzllj May 14 21:36:50.924: INFO: Got endpoints: latency-svc-pzllj [1.086114454s] May 14 21:36:50.934: INFO: 
Created: latency-svc-ckcjn May 14 21:36:50.955: INFO: Got endpoints: latency-svc-ckcjn [1.075090555s] May 14 21:36:50.976: INFO: Created: latency-svc-ttkq5 May 14 21:36:51.009: INFO: Got endpoints: latency-svc-ttkq5 [1.079709461s] May 14 21:36:51.135: INFO: Created: latency-svc-dbfj4 May 14 21:36:51.157: INFO: Got endpoints: latency-svc-dbfj4 [1.16703613s] May 14 21:36:51.223: INFO: Created: latency-svc-srcmb May 14 21:36:51.241: INFO: Got endpoints: latency-svc-srcmb [1.204309982s] May 14 21:36:51.277: INFO: Created: latency-svc-j4ggw May 14 21:36:51.295: INFO: Got endpoints: latency-svc-j4ggw [1.216631122s] May 14 21:36:51.322: INFO: Created: latency-svc-kjcvb May 14 21:36:51.409: INFO: Got endpoints: latency-svc-kjcvb [1.257432848s] May 14 21:36:51.411: INFO: Created: latency-svc-6r7n5 May 14 21:36:51.430: INFO: Got endpoints: latency-svc-6r7n5 [1.230408753s] May 14 21:36:51.469: INFO: Created: latency-svc-xt5kt May 14 21:36:51.482: INFO: Got endpoints: latency-svc-xt5kt [1.216478453s] May 14 21:36:51.502: INFO: Created: latency-svc-qck97 May 14 21:36:51.553: INFO: Got endpoints: latency-svc-qck97 [1.238493708s] May 14 21:36:51.555: INFO: Created: latency-svc-p664m May 14 21:36:51.585: INFO: Got endpoints: latency-svc-p664m [1.168429019s] May 14 21:36:51.620: INFO: Created: latency-svc-mtktm May 14 21:36:51.639: INFO: Got endpoints: latency-svc-mtktm [1.136778776s] May 14 21:36:51.715: INFO: Created: latency-svc-4qgh2 May 14 21:36:51.729: INFO: Got endpoints: latency-svc-4qgh2 [1.089527943s] May 14 21:36:51.765: INFO: Created: latency-svc-wgtvr May 14 21:36:51.807: INFO: Got endpoints: latency-svc-wgtvr [1.01529354s] May 14 21:36:51.859: INFO: Created: latency-svc-kktmr May 14 21:36:51.873: INFO: Got endpoints: latency-svc-kktmr [1.046854915s] May 14 21:36:51.901: INFO: Created: latency-svc-zpkch May 14 21:36:51.915: INFO: Got endpoints: latency-svc-zpkch [991.36873ms] May 14 21:36:51.937: INFO: Created: latency-svc-hbc5f May 14 21:36:51.952: INFO: Got endpoints: latency-svc-hbc5f [996.805075ms] May 14 21:36:52.000: INFO: Created: latency-svc-t525d May 14 21:36:52.006: INFO: Got endpoints: latency-svc-t525d [996.980778ms] May 14 21:36:52.032: INFO: Created: latency-svc-zqpd7 May 14 21:36:52.042: INFO: Got endpoints: latency-svc-zqpd7 [885.436262ms] May 14 21:36:52.075: INFO: Created: latency-svc-fb68q May 14 21:36:52.140: INFO: Got endpoints: latency-svc-fb68q [898.973243ms] May 14 21:36:52.159: INFO: Created: latency-svc-qsbpt May 14 21:36:52.175: INFO: Got endpoints: latency-svc-qsbpt [879.947481ms] May 14 21:36:52.278: INFO: Created: latency-svc-nthgl May 14 21:36:52.309: INFO: Got endpoints: latency-svc-nthgl [899.948093ms] May 14 21:36:52.309: INFO: Created: latency-svc-lszwj May 14 21:36:52.369: INFO: Got endpoints: latency-svc-lszwj [939.457586ms] May 14 21:36:52.433: INFO: Created: latency-svc-89mf6 May 14 21:36:52.440: INFO: Got endpoints: latency-svc-89mf6 [957.665306ms] May 14 21:36:52.461: INFO: Created: latency-svc-4ppd8 May 14 21:36:52.476: INFO: Got endpoints: latency-svc-4ppd8 [923.273588ms] May 14 21:36:52.498: INFO: Created: latency-svc-kpxtc May 14 21:36:52.512: INFO: Got endpoints: latency-svc-kpxtc [926.562243ms] May 14 21:36:52.577: INFO: Created: latency-svc-5gn2s May 14 21:36:52.591: INFO: Got endpoints: latency-svc-5gn2s [951.294646ms] May 14 21:36:52.621: INFO: Created: latency-svc-ncvq5 May 14 21:36:52.639: INFO: Got endpoints: latency-svc-ncvq5 [909.720994ms] May 14 21:36:52.732: INFO: Created: latency-svc-7rn6l May 14 21:36:52.735: INFO: Got endpoints: 
latency-svc-7rn6l [927.289284ms] May 14 21:36:52.888: INFO: Created: latency-svc-8ng88 May 14 21:36:52.896: INFO: Got endpoints: latency-svc-8ng88 [1.023267973s] May 14 21:36:52.923: INFO: Created: latency-svc-dp9rh May 14 21:36:52.948: INFO: Got endpoints: latency-svc-dp9rh [1.032410433s] May 14 21:36:52.969: INFO: Created: latency-svc-pnm9b May 14 21:36:52.981: INFO: Got endpoints: latency-svc-pnm9b [1.029475797s] May 14 21:36:53.032: INFO: Created: latency-svc-n2q45 May 14 21:36:53.041: INFO: Got endpoints: latency-svc-n2q45 [1.035089057s] May 14 21:36:53.098: INFO: Created: latency-svc-6g4lf May 14 21:36:53.120: INFO: Got endpoints: latency-svc-6g4lf [1.077672293s] May 14 21:36:53.169: INFO: Created: latency-svc-cqbn4 May 14 21:36:53.172: INFO: Got endpoints: latency-svc-cqbn4 [1.032412771s] May 14 21:36:53.233: INFO: Created: latency-svc-2xc98 May 14 21:36:53.246: INFO: Got endpoints: latency-svc-2xc98 [1.071144171s] May 14 21:36:53.313: INFO: Created: latency-svc-v74h7 May 14 21:36:53.318: INFO: Got endpoints: latency-svc-v74h7 [1.009349968s] May 14 21:36:53.350: INFO: Created: latency-svc-qcjxb May 14 21:36:53.373: INFO: Got endpoints: latency-svc-qcjxb [1.003473525s] May 14 21:36:53.395: INFO: Created: latency-svc-dzz6v May 14 21:36:53.409: INFO: Got endpoints: latency-svc-dzz6v [969.215807ms] May 14 21:36:53.457: INFO: Created: latency-svc-7b8vs May 14 21:36:53.464: INFO: Got endpoints: latency-svc-7b8vs [987.513608ms] May 14 21:36:53.487: INFO: Created: latency-svc-vxrn4 May 14 21:36:53.506: INFO: Got endpoints: latency-svc-vxrn4 [993.436518ms] May 14 21:36:53.542: INFO: Created: latency-svc-6kqd6 May 14 21:36:53.595: INFO: Got endpoints: latency-svc-6kqd6 [1.003989069s] May 14 21:36:53.597: INFO: Created: latency-svc-hjfzt May 14 21:36:53.614: INFO: Got endpoints: latency-svc-hjfzt [975.45801ms] May 14 21:36:53.640: INFO: Created: latency-svc-55wmr May 14 21:36:53.655: INFO: Got endpoints: latency-svc-55wmr [919.769947ms] May 14 21:36:53.688: INFO: Created: latency-svc-hf4hf May 14 21:36:53.768: INFO: Got endpoints: latency-svc-hf4hf [871.649532ms] May 14 21:36:53.770: INFO: Created: latency-svc-87m64 May 14 21:36:53.811: INFO: Got endpoints: latency-svc-87m64 [863.290072ms] May 14 21:36:53.859: INFO: Created: latency-svc-9kjkm May 14 21:36:53.948: INFO: Got endpoints: latency-svc-9kjkm [966.479006ms] May 14 21:36:54.010: INFO: Created: latency-svc-xx47p May 14 21:36:54.046: INFO: Got endpoints: latency-svc-xx47p [1.004508105s] May 14 21:36:54.131: INFO: Created: latency-svc-jj4dl May 14 21:36:54.160: INFO: Got endpoints: latency-svc-jj4dl [1.039624573s] May 14 21:36:54.214: INFO: Created: latency-svc-6xfl2 May 14 21:36:54.295: INFO: Got endpoints: latency-svc-6xfl2 [1.122933519s] May 14 21:36:54.301: INFO: Created: latency-svc-m28sv May 14 21:36:54.340: INFO: Got endpoints: latency-svc-m28sv [1.094160355s] May 14 21:36:54.363: INFO: Created: latency-svc-654c7 May 14 21:36:54.376: INFO: Got endpoints: latency-svc-654c7 [1.057927667s] May 14 21:36:54.427: INFO: Created: latency-svc-9vtfk May 14 21:36:54.430: INFO: Got endpoints: latency-svc-9vtfk [1.057328041s] May 14 21:36:54.496: INFO: Created: latency-svc-977cx May 14 21:36:54.509: INFO: Got endpoints: latency-svc-977cx [1.100052957s] May 14 21:36:54.600: INFO: Created: latency-svc-78gf9 May 14 21:36:54.605: INFO: Got endpoints: latency-svc-78gf9 [1.141517449s] May 14 21:36:54.658: INFO: Created: latency-svc-8sm4k May 14 21:36:54.822: INFO: Got endpoints: latency-svc-8sm4k [1.316499882s] May 14 21:36:54.824: INFO: Created: 
latency-svc-bbtvp May 14 21:36:54.888: INFO: Got endpoints: latency-svc-bbtvp [1.292758275s] May 14 21:36:54.992: INFO: Created: latency-svc-vpr6p May 14 21:36:55.020: INFO: Got endpoints: latency-svc-vpr6p [1.405222101s] May 14 21:36:55.060: INFO: Created: latency-svc-p8gj4 May 14 21:36:55.265: INFO: Got endpoints: latency-svc-p8gj4 [1.610744721s] May 14 21:36:55.428: INFO: Created: latency-svc-86wnq May 14 21:36:55.432: INFO: Got endpoints: latency-svc-86wnq [1.663726747s] May 14 21:36:55.466: INFO: Created: latency-svc-fz44q May 14 21:36:55.492: INFO: Got endpoints: latency-svc-fz44q [1.681059473s] May 14 21:36:55.571: INFO: Created: latency-svc-mgrtp May 14 21:36:55.586: INFO: Got endpoints: latency-svc-mgrtp [1.638401007s] May 14 21:36:55.621: INFO: Created: latency-svc-wsmcc May 14 21:36:55.638: INFO: Got endpoints: latency-svc-wsmcc [1.592687322s] May 14 21:36:55.715: INFO: Created: latency-svc-xqchw May 14 21:36:55.718: INFO: Got endpoints: latency-svc-xqchw [1.55879076s] May 14 21:36:55.744: INFO: Created: latency-svc-x7nd5 May 14 21:36:55.765: INFO: Got endpoints: latency-svc-x7nd5 [1.469860741s] May 14 21:36:55.787: INFO: Created: latency-svc-5cnkx May 14 21:36:55.801: INFO: Got endpoints: latency-svc-5cnkx [1.460615448s] May 14 21:36:55.858: INFO: Created: latency-svc-5vt7p May 14 21:36:55.861: INFO: Got endpoints: latency-svc-5vt7p [1.485051049s] May 14 21:36:55.934: INFO: Created: latency-svc-4zkn5 May 14 21:36:56.014: INFO: Got endpoints: latency-svc-4zkn5 [1.583280571s] May 14 21:36:56.047: INFO: Created: latency-svc-86s6s May 14 21:36:56.086: INFO: Got endpoints: latency-svc-86s6s [1.576636945s] May 14 21:36:56.110: INFO: Created: latency-svc-c2xq6 May 14 21:36:56.169: INFO: Got endpoints: latency-svc-c2xq6 [1.564163508s] May 14 21:36:56.181: INFO: Created: latency-svc-n9mf7 May 14 21:36:56.233: INFO: Got endpoints: latency-svc-n9mf7 [1.411199365s] May 14 21:36:56.270: INFO: Created: latency-svc-495ss May 14 21:36:56.319: INFO: Got endpoints: latency-svc-495ss [1.431261503s] May 14 21:36:56.329: INFO: Created: latency-svc-qnj7x May 14 21:36:56.345: INFO: Got endpoints: latency-svc-qnj7x [1.325186984s] May 14 21:36:56.368: INFO: Created: latency-svc-hjg5n May 14 21:36:56.381: INFO: Got endpoints: latency-svc-hjg5n [1.11587869s] May 14 21:36:56.481: INFO: Created: latency-svc-599gz May 14 21:36:56.483: INFO: Got endpoints: latency-svc-599gz [1.051290885s] May 14 21:36:56.516: INFO: Created: latency-svc-mbq57 May 14 21:36:56.532: INFO: Got endpoints: latency-svc-mbq57 [1.039445454s] May 14 21:36:56.570: INFO: Created: latency-svc-hp9tp May 14 21:36:56.636: INFO: Got endpoints: latency-svc-hp9tp [1.04967886s] May 14 21:36:56.638: INFO: Created: latency-svc-zl466 May 14 21:36:56.659: INFO: Got endpoints: latency-svc-zl466 [1.020736119s] May 14 21:36:56.719: INFO: Created: latency-svc-t647q May 14 21:36:56.786: INFO: Got endpoints: latency-svc-t647q [1.067833152s] May 14 21:36:56.793: INFO: Created: latency-svc-p5tx8 May 14 21:36:56.818: INFO: Got endpoints: latency-svc-p5tx8 [1.052736767s] May 14 21:36:56.866: INFO: Created: latency-svc-t5wfd May 14 21:36:56.925: INFO: Got endpoints: latency-svc-t5wfd [1.12369543s] May 14 21:36:56.947: INFO: Created: latency-svc-nsq2k May 14 21:36:56.966: INFO: Got endpoints: latency-svc-nsq2k [1.104369294s] May 14 21:36:57.003: INFO: Created: latency-svc-4vc52 May 14 21:36:57.022: INFO: Got endpoints: latency-svc-4vc52 [1.007958371s] May 14 21:36:57.092: INFO: Created: latency-svc-b86qj May 14 21:36:57.111: INFO: Got endpoints: 
latency-svc-b86qj [1.025021648s] May 14 21:36:57.169: INFO: Created: latency-svc-7lwh2 May 14 21:36:57.183: INFO: Got endpoints: latency-svc-7lwh2 [1.013300496s] May 14 21:36:57.266: INFO: Created: latency-svc-ns6wj May 14 21:36:57.273: INFO: Got endpoints: latency-svc-ns6wj [1.040063712s] May 14 21:36:57.298: INFO: Created: latency-svc-2h85s May 14 21:36:57.322: INFO: Got endpoints: latency-svc-2h85s [1.002902818s] May 14 21:36:57.349: INFO: Created: latency-svc-vlsrz May 14 21:36:57.358: INFO: Got endpoints: latency-svc-vlsrz [1.013303211s] May 14 21:36:57.415: INFO: Created: latency-svc-6wmtv May 14 21:36:57.424: INFO: Got endpoints: latency-svc-6wmtv [1.043163914s] May 14 21:36:57.425: INFO: Latencies: [69.182132ms 111.770165ms 154.116777ms 239.508859ms 256.389105ms 298.520496ms 349.698088ms 396.638634ms 508.278351ms 519.114428ms 601.932476ms 649.81932ms 723.03416ms 737.273948ms 742.01366ms 742.047211ms 746.65181ms 748.260483ms 753.201406ms 766.557131ms 767.504975ms 772.240576ms 779.362798ms 780.150967ms 782.953946ms 784.246346ms 789.974416ms 795.995554ms 798.524966ms 803.636609ms 813.879799ms 842.66844ms 849.588785ms 850.892194ms 856.23408ms 863.290072ms 866.286619ms 866.361478ms 868.196705ms 868.394194ms 871.649532ms 874.043778ms 875.293284ms 877.148494ms 879.947481ms 884.76327ms 885.436262ms 887.69852ms 887.919849ms 891.719191ms 892.257988ms 892.290193ms 893.671553ms 896.069208ms 898.010242ms 898.973243ms 899.403216ms 899.948093ms 900.331032ms 901.903889ms 908.947487ms 909.720994ms 913.240104ms 914.070268ms 914.48872ms 914.554626ms 915.733811ms 919.769947ms 923.273588ms 926.562243ms 927.289284ms 929.042308ms 939.457586ms 945.146385ms 945.687893ms 948.246844ms 951.294646ms 952.024626ms 953.926963ms 954.727358ms 957.665306ms 963.600699ms 963.784127ms 966.138549ms 966.479006ms 968.106435ms 968.950532ms 969.215807ms 969.675265ms 970.660992ms 973.250315ms 973.649416ms 975.45801ms 976.434163ms 979.52632ms 981.502637ms 983.591238ms 987.185121ms 987.513608ms 988.416908ms 991.36873ms 992.474438ms 992.831044ms 993.436518ms 996.805075ms 996.980778ms 997.595881ms 1.000686622s 1.001170519s 1.002902818s 1.003473525s 1.003989069s 1.004508105s 1.007958371s 1.009349968s 1.013300496s 1.013303211s 1.01529354s 1.020736119s 1.023267973s 1.025021648s 1.029475797s 1.032410433s 1.032412771s 1.035089057s 1.039445454s 1.039624573s 1.040063712s 1.0403856s 1.043163914s 1.046854915s 1.04967886s 1.051290885s 1.052736767s 1.052832788s 1.057142697s 1.057328041s 1.057927667s 1.059133827s 1.067833152s 1.071144171s 1.073977751s 1.075090555s 1.077672293s 1.079709461s 1.081126178s 1.082961791s 1.086114454s 1.089527943s 1.091043634s 1.094160355s 1.100052957s 1.100936689s 1.104369294s 1.11587869s 1.122933519s 1.12369543s 1.136778776s 1.141517449s 1.16703613s 1.168429019s 1.204309982s 1.205257174s 1.216478453s 1.216631122s 1.222601693s 1.230408753s 1.238493708s 1.255434916s 1.257432848s 1.270274872s 1.289243972s 1.292758275s 1.293473622s 1.301027109s 1.304740987s 1.316499882s 1.325186984s 1.334233745s 1.339905321s 1.340764935s 1.340776162s 1.34367438s 1.35568728s 1.370642532s 1.405222101s 1.411199365s 1.431261503s 1.460615448s 1.469860741s 1.485051049s 1.55879076s 1.564163508s 1.576636945s 1.583280571s 1.592687322s 1.610744721s 1.638401007s 1.663726747s 1.681059473s] May 14 21:36:57.425: INFO: 50 %ile: 991.36873ms May 14 21:36:57.425: INFO: 90 %ile: 1.340764935s May 14 21:36:57.425: INFO: 99 %ile: 1.663726747s May 14 21:36:57.425: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:36:57.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6239" for this suite. • [SLOW TEST:18.123 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":71,"skipped":1125,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:36:57.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 14 21:37:02.144: INFO: Successfully updated pod "labelsupdatef5913237-5be3-4366-b0e1-a5866a161fec" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:37:04.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9780" for this suite. • [SLOW TEST:6.769 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:37:04.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
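[Editor's sketch] For context on the prestop case that follows: the test creates a pod whose container declares a preStop exec hook, deletes the pod, and then verifies against the HTTPGet handler pod created above that the hook actually ran. A minimal sketch of such a pod in Go against core/v1; the name, image, and hook command here are illustrative, not the e2e fixture's:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newPreStopHookPod builds a pod whose container runs a preStop exec
    // hook before termination. Names, image, and command are illustrative.
    func newPreStopHookPod(namespace string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook", Namespace: namespace},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        // PreStop runs before the container gets SIGTERM; the
                        // kubelet waits for it, bounded by the pod's
                        // terminationGracePeriodSeconds. Note: the Handler type
                        // was renamed LifecycleHandler in k8s.io/api after v1.22.
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "echo prestop > /tmp/hook"},
                            },
                        },
                    },
                }},
            },
        }
    }
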
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 14 21:37:12.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 21:37:12.686: INFO: Pod pod-with-prestop-exec-hook still exists May 14 21:37:14.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 21:37:14.744: INFO: Pod pod-with-prestop-exec-hook still exists May 14 21:37:16.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 21:37:16.777: INFO: Pod pod-with-prestop-exec-hook still exists May 14 21:37:18.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 21:37:18.698: INFO: Pod pod-with-prestop-exec-hook still exists May 14 21:37:20.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 21:37:20.706: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:37:20.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1246" for this suite. • [SLOW TEST:16.565 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:37:20.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 14 21:37:21.262: INFO: Waiting up to 5m0s for pod "pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c" in namespace "emptydir-710" to be "success or failure" May 14 21:37:21.293: INFO: Pod "pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.105803ms May 14 21:37:23.355: INFO: Pod "pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09242285s May 14 21:37:25.370: INFO: Pod "pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.108086355s STEP: Saw pod success May 14 21:37:25.370: INFO: Pod "pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c" satisfied condition "success or failure" May 14 21:37:25.388: INFO: Trying to get logs from node jerma-worker2 pod pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c container test-container: STEP: delete the pod May 14 21:37:25.583: INFO: Waiting for pod pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c to disappear May 14 21:37:25.586: INFO: Pod pod-05891178-bf14-44ee-b30f-e4f3f9bbe14c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:37:25.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-710" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:37:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 14 21:37:25.728: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:37:25.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8847" for this suite. 
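[Editor's sketch] The Kubectl proxy test above runs kubectl proxy with -p 0, which binds an ephemeral port, so the caller has to scrape the chosen port from the proxy's startup banner before it can curl /api/. A rough sketch of that discovery step; this is not the framework's actual code, and the banner regex is an assumption based on kubectl's usual "Starting to serve on HOST:PORT" output:

    package sketch

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
        "strconv"
    )

    // startProxyOnRandomPort runs `kubectl proxy -p 0` (kubectl assumed on
    // PATH) and parses the ephemeral port from the startup banner.
    func startProxyOnRandomPort(kubeconfig string) (int, *exec.Cmd, error) {
        cmd := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
            "proxy", "-p", "0", "--disable-filter")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return 0, nil, err
        }
        if err := cmd.Start(); err != nil {
            return 0, nil, err
        }
        re := regexp.MustCompile(`Starting to serve on [\d.]+:(\d+)`)
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            if m := re.FindStringSubmatch(scanner.Text()); m != nil {
                port, _ := strconv.Atoi(m[1])
                return port, cmd, nil // caller must eventually kill cmd.Process
            }
        }
        _ = cmd.Process.Kill() // no banner seen: clean up and report failure
        return 0, nil, fmt.Errorf("kubectl proxy banner not seen on stdout")
    }
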
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":75,"skipped":1212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:37:25.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-de28615a-5484-4545-92df-fbcc63ed8e95 in namespace container-probe-156 May 14 21:37:32.158: INFO: Started pod liveness-de28615a-5484-4545-92df-fbcc63ed8e95 in namespace container-probe-156 STEP: checking the pod's current state and verifying that restartCount is present May 14 21:37:32.199: INFO: Initial restart count of pod liveness-de28615a-5484-4545-92df-fbcc63ed8e95 is 0 May 14 21:37:48.324: INFO: Restart count of pod container-probe-156/liveness-de28615a-5484-4545-92df-fbcc63ed8e95 is now 1 (16.124840144s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:37:48.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-156" for this suite. 
• [SLOW TEST:22.468 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:37:48.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:37:48.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 14 21:37:48.840: INFO: stderr: "" May 14 21:37:48.840: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:37:48.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6920" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":77,"skipped":1272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:37:48.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
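[Editor's sketch] The poststart case that follows is the mirror image of the prestop one earlier: a postStart exec hook runs right after the container is created, and the container is not considered started until the hook returns. A minimal fragment, with an illustrative hook command:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // postStartLifecycle attaches an exec hook that runs immediately after
    // the container starts; if the hook fails, the kubelet kills the
    // container and the pod's restartPolicy takes over.
    func postStartLifecycle() *corev1.Lifecycle {
        return &corev1.Lifecycle{
            PostStart: &corev1.Handler{ // LifecycleHandler in k8s.io/api after v1.22
                Exec: &corev1.ExecAction{
                    Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
                },
            },
        }
    }
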
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 14 21:37:57.035: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:37:57.071: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:37:59.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:37:59.075: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:38:01.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:38:01.076: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:38:03.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:38:03.076: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:38:05.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:38:05.076: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:38:07.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:38:07.076: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:38:09.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:38:09.075: INFO: Pod pod-with-poststart-exec-hook still exists May 14 21:38:11.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 21:38:11.074: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:38:11.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6792" for this suite. 
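[Editor's sketch] The long run of "Waiting for pod ... to disappear" records above is a plain poll loop: re-Get the pod on an interval until the API server answers NotFound. A sketch of the pattern with client-go; the context-taking signatures below are from client-go v0.18+, while the v1.17-era client used by this suite omitted the context argument:

    package sketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodToDisappear polls until the pod is gone or the timeout hits.
    func waitForPodToDisappear(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            _, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // pod deleted, done
            }
            // err == nil: pod still exists, keep polling; any other error aborts.
            return false, err
        })
    }
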
• [SLOW TEST:22.228 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1301,"failed":0} S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:38:11.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7f9b7261-2355-4c49-942f-ca723ac74b91 STEP: Creating a pod to test consume secrets May 14 21:38:11.712: INFO: Waiting up to 5m0s for pod "pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e" in namespace "secrets-8495" to be "success or failure" May 14 21:38:11.728: INFO: Pod "pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.659658ms May 14 21:38:13.732: INFO: Pod "pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019743645s May 14 21:38:15.736: INFO: Pod "pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e": Phase="Running", Reason="", readiness=true. Elapsed: 4.023696925s May 14 21:38:17.740: INFO: Pod "pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028015943s STEP: Saw pod success May 14 21:38:17.740: INFO: Pod "pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e" satisfied condition "success or failure" May 14 21:38:17.743: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e container secret-volume-test: STEP: delete the pod May 14 21:38:17.777: INFO: Waiting for pod pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e to disappear May 14 21:38:17.794: INFO: Pod pod-secrets-96b067f6-fe6d-49c2-b076-a8775354f41e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:38:17.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8495" for this suite. STEP: Destroying namespace "secret-namespace-5965" for this suite. 
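[Editor's sketch] The Secrets test above makes a scoping point: SecretVolumeSource carries only a secret name, never a namespace, so a pod can only mount secrets from its own namespace, and the same-named secret created in the second namespace stays invisible to it. The volume wiring, sketched with illustrative names:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // secretVolume mounts the named secret from the pod's own namespace.
    // There is no namespace field to set, which is exactly what the test
    // demonstrates: same-named secrets elsewhere cannot leak in.
    func secretVolume() (corev1.Volume, corev1.VolumeMount) {
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
            },
        }
        mount := corev1.VolumeMount{
            Name:      "secret-volume",
            MountPath: "/etc/secret-volume",
            ReadOnly:  true,
        }
        return vol, mount
    }
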
• [SLOW TEST:6.739 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:38:17.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:38:18.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a" in namespace "downward-api-4656" to be "success or failure" May 14 21:38:18.022: INFO: Pod "downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.494804ms May 14 21:38:20.063: INFO: Pod "downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044676588s May 14 21:38:22.067: INFO: Pod "downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048439175s STEP: Saw pod success May 14 21:38:22.067: INFO: Pod "downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a" satisfied condition "success or failure" May 14 21:38:22.069: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a container client-container: STEP: delete the pod May 14 21:38:22.104: INFO: Waiting for pod downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a to disappear May 14 21:38:22.118: INFO: Pod downwardapi-volume-4e9ae72d-38af-47ea-898c-82f2d6c1c65a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:38:22.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4656" for this suite. 
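[Editor's sketch] The Downward API volume test above projects the container's own memory request into a file that the test container then reads back. A sketch of the volume source that does this; the file path and container name are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // downwardAPIMemoryRequest projects requests.memory of the container
    // named "client-container" into a file under the volume's mount path.
    func downwardAPIMemoryRequest() corev1.VolumeSource {
        return corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_request", // readable at <mountPath>/memory_request
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "requests.memory",
                        // Divisor defaults to 1; change it to report in other units.
                        Divisor: resource.MustParse("1"),
                    },
                }},
            },
        }
    }
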
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1351,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:38:22.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 14 21:38:26.299: INFO: Pod pod-hostip-9f60177b-a4b2-481f-a7b0-bf9dfac06909 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:38:26.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8987" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:38:26.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-169ff909-3df7-40a6-8d55-54791f2a69c3 STEP: Creating a pod to test consume secrets May 14 21:38:26.430: INFO: Waiting up to 5m0s for pod "pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1" in namespace "secrets-9149" to be "success or failure" May 14 21:38:26.463: INFO: Pod "pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.112338ms May 14 21:38:28.467: INFO: Pod "pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036299349s May 14 21:38:30.471: INFO: Pod "pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040652617s STEP: Saw pod success May 14 21:38:30.471: INFO: Pod "pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1" satisfied condition "success or failure" May 14 21:38:30.474: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1 container secret-volume-test: STEP: delete the pod May 14 21:38:30.499: INFO: Waiting for pod pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1 to disappear May 14 21:38:30.503: INFO: Pod pod-secrets-6660fbab-972b-4263-8f3d-7d9fd804a5b1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:38:30.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9149" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:38:30.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 14 21:38:30.637: INFO: Waiting up to 5m0s for pod "client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e" in namespace "containers-6330" to be "success or failure" May 14 21:38:30.641: INFO: Pod "client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02207ms May 14 21:38:32.806: INFO: Pod "client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168682035s May 14 21:38:34.810: INFO: Pod "client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172541752s May 14 21:38:36.813: INFO: Pod "client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.17564219s STEP: Saw pod success May 14 21:38:36.813: INFO: Pod "client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e" satisfied condition "success or failure" May 14 21:38:36.815: INFO: Trying to get logs from node jerma-worker2 pod client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e container test-container: STEP: delete the pod May 14 21:38:36.831: INFO: Waiting for pod client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e to disappear May 14 21:38:36.836: INFO: Pod client-containers-94bcc2de-744c-48f4-b8ff-6133a330f61e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:38:36.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6330" for this suite. • [SLOW TEST:6.326 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1417,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:38:36.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2399.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2399.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 21:38:43.041: INFO: File wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-41a94b50-1992-4dae-ba47-0a8bc901bc06 contains '' instead of 'foo.example.com.' 
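The service under test maps a cluster DNS name onto an external CNAME, which is why the probers run dig in a loop until the record flips from foo.example.com. to bar.example.com. A stand-alone equivalent of the object being mutated (service name follows the log; namespace omitted):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Illustrative check from any pod in the cluster, mirroring the probe loop:
#   dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME   -> foo.example.com.
# After editing the service so externalName is bar.example.com, the same
# query should eventually return bar.example.com.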
May 14 21:38:43.045: INFO: Lookups using dns-2399/dns-test-41a94b50-1992-4dae-ba47-0a8bc901bc06 failed for: [wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local] May 14 21:38:48.052: INFO: DNS probes using dns-test-41a94b50-1992-4dae-ba47-0a8bc901bc06 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2399.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2399.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 21:38:56.442: INFO: File wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:38:56.445: INFO: File jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:38:56.445: INFO: Lookups using dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 failed for: [wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local] May 14 21:39:01.449: INFO: File wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:39:01.452: INFO: File jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:39:01.452: INFO: Lookups using dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 failed for: [wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local] May 14 21:39:06.450: INFO: File wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:39:06.453: INFO: File jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:39:06.453: INFO: Lookups using dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 failed for: [wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local] May 14 21:39:11.454: INFO: File wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 21:39:11.457: INFO: File jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local from pod dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 14 21:39:11.458: INFO: Lookups using dns-2399/dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 failed for: [wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local] May 14 21:39:16.465: INFO: DNS probes using dns-test-76741212-1c55-4e27-9df1-aa388a14fae2 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2399.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2399.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2399.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2399.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 21:39:25.222: INFO: DNS probes using dns-test-1b071d94-6707-46ea-a79b-af996fa2e9ec succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:39:25.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2399" for this suite. • [SLOW TEST:48.510 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":84,"skipped":1423,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:39:25.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:39:25.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750" in namespace "projected-8989" to be "success or failure" May 14 21:39:25.589: INFO: Pod "downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750": Phase="Pending", Reason="", readiness=false. Elapsed: 110.673498ms May 14 21:39:27.663: INFO: Pod "downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.18397755s May 14 21:39:29.667: INFO: Pod "downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188417242s May 14 21:39:31.711: INFO: Pod "downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231967263s STEP: Saw pod success May 14 21:39:31.711: INFO: Pod "downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750" satisfied condition "success or failure" May 14 21:39:31.713: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750 container client-container: STEP: delete the pod May 14 21:39:31.740: INFO: Waiting for pod downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750 to disappear May 14 21:39:31.744: INFO: Pod downwardapi-volume-50645094-f691-4cd9-8eec-db59007d7750 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:39:31.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8989" for this suite. • [SLOW TEST:6.400 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1433,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:39:31.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8802 STEP: creating replication controller nodeport-test in namespace services-8802 I0514 21:39:32.028041 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8802, replica count: 2 I0514 21:39:35.078461 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:39:38.078675 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 21:39:38.078: INFO: Creating new exec pod May 14 21:39:43.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8802 execpodj5bjs -- /bin/sh -x -c nc -zv -t 
-w 2 nodeport-test 80' May 14 21:39:46.299: INFO: stderr: "I0514 21:39:46.184177 1406 log.go:172] (0xc0000f4e70) (0xc000685f40) Create stream\nI0514 21:39:46.184210 1406 log.go:172] (0xc0000f4e70) (0xc000685f40) Stream added, broadcasting: 1\nI0514 21:39:46.187241 1406 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0514 21:39:46.187304 1406 log.go:172] (0xc0000f4e70) (0xc0006086e0) Create stream\nI0514 21:39:46.187324 1406 log.go:172] (0xc0000f4e70) (0xc0006086e0) Stream added, broadcasting: 3\nI0514 21:39:46.188483 1406 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0514 21:39:46.188528 1406 log.go:172] (0xc0000f4e70) (0xc0002634a0) Create stream\nI0514 21:39:46.188542 1406 log.go:172] (0xc0000f4e70) (0xc0002634a0) Stream added, broadcasting: 5\nI0514 21:39:46.189776 1406 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0514 21:39:46.268035 1406 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0514 21:39:46.268070 1406 log.go:172] (0xc0002634a0) (5) Data frame handling\nI0514 21:39:46.268091 1406 log.go:172] (0xc0002634a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0514 21:39:46.290781 1406 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0514 21:39:46.290811 1406 log.go:172] (0xc0002634a0) (5) Data frame handling\nI0514 21:39:46.290830 1406 log.go:172] (0xc0002634a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0514 21:39:46.291248 1406 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0514 21:39:46.291267 1406 log.go:172] (0xc0002634a0) (5) Data frame handling\nI0514 21:39:46.291291 1406 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0514 21:39:46.291301 1406 log.go:172] (0xc0006086e0) (3) Data frame handling\nI0514 21:39:46.293539 1406 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0514 21:39:46.293558 1406 log.go:172] (0xc000685f40) (1) Data frame handling\nI0514 21:39:46.293569 1406 log.go:172] (0xc000685f40) (1) Data frame sent\nI0514 21:39:46.293739 1406 log.go:172] (0xc0000f4e70) (0xc000685f40) Stream removed, broadcasting: 1\nI0514 21:39:46.293816 1406 log.go:172] (0xc0000f4e70) Go away received\nI0514 21:39:46.294183 1406 log.go:172] (0xc0000f4e70) (0xc000685f40) Stream removed, broadcasting: 1\nI0514 21:39:46.294205 1406 log.go:172] (0xc0000f4e70) (0xc0006086e0) Stream removed, broadcasting: 3\nI0514 21:39:46.294216 1406 log.go:172] (0xc0000f4e70) (0xc0002634a0) Stream removed, broadcasting: 5\n" May 14 21:39:46.299: INFO: stdout: "" May 14 21:39:46.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8802 execpodj5bjs -- /bin/sh -x -c nc -zv -t -w 2 10.108.101.11 80' May 14 21:39:46.506: INFO: stderr: "I0514 21:39:46.428100 1439 log.go:172] (0xc0009e5290) (0xc0009a26e0) Create stream\nI0514 21:39:46.428162 1439 log.go:172] (0xc0009e5290) (0xc0009a26e0) Stream added, broadcasting: 1\nI0514 21:39:46.430757 1439 log.go:172] (0xc0009e5290) Reply frame received for 1\nI0514 21:39:46.430819 1439 log.go:172] (0xc0009e5290) (0xc000423400) Create stream\nI0514 21:39:46.430836 1439 log.go:172] (0xc0009e5290) (0xc000423400) Stream added, broadcasting: 3\nI0514 21:39:46.431871 1439 log.go:172] (0xc0009e5290) Reply frame received for 3\nI0514 21:39:46.431895 1439 log.go:172] (0xc0009e5290) (0xc0004234a0) Create stream\nI0514 21:39:46.431902 1439 log.go:172] (0xc0009e5290) (0xc0004234a0) Stream added, broadcasting: 5\nI0514 21:39:46.432714 1439 log.go:172] (0xc0009e5290) Reply frame received for 5\nI0514 21:39:46.500493 1439 log.go:172] 
(0xc0009e5290) Data frame received for 3\nI0514 21:39:46.500525 1439 log.go:172] (0xc000423400) (3) Data frame handling\nI0514 21:39:46.500613 1439 log.go:172] (0xc0009e5290) Data frame received for 5\nI0514 21:39:46.500665 1439 log.go:172] (0xc0004234a0) (5) Data frame handling\nI0514 21:39:46.500701 1439 log.go:172] (0xc0004234a0) (5) Data frame sent\nI0514 21:39:46.500729 1439 log.go:172] (0xc0009e5290) Data frame received for 5\nI0514 21:39:46.500752 1439 log.go:172] (0xc0004234a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.101.11 80\nConnection to 10.108.101.11 80 port [tcp/http] succeeded!\nI0514 21:39:46.502442 1439 log.go:172] (0xc0009e5290) Data frame received for 1\nI0514 21:39:46.502474 1439 log.go:172] (0xc0009a26e0) (1) Data frame handling\nI0514 21:39:46.502493 1439 log.go:172] (0xc0009a26e0) (1) Data frame sent\nI0514 21:39:46.502516 1439 log.go:172] (0xc0009e5290) (0xc0009a26e0) Stream removed, broadcasting: 1\nI0514 21:39:46.502545 1439 log.go:172] (0xc0009e5290) Go away received\nI0514 21:39:46.503155 1439 log.go:172] (0xc0009e5290) (0xc0009a26e0) Stream removed, broadcasting: 1\nI0514 21:39:46.503182 1439 log.go:172] (0xc0009e5290) (0xc000423400) Stream removed, broadcasting: 3\nI0514 21:39:46.503192 1439 log.go:172] (0xc0009e5290) (0xc0004234a0) Stream removed, broadcasting: 5\n" May 14 21:39:46.506: INFO: stdout: "" May 14 21:39:46.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8802 execpodj5bjs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31428' May 14 21:39:46.700: INFO: stderr: "I0514 21:39:46.631185 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebea0) Create stream\nI0514 21:39:46.631257 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebea0) Stream added, broadcasting: 1\nI0514 21:39:46.634394 1459 log.go:172] (0xc0000ed1e0) Reply frame received for 1\nI0514 21:39:46.634443 1459 log.go:172] (0xc0000ed1e0) (0xc000570780) Create stream\nI0514 21:39:46.634456 1459 log.go:172] (0xc0000ed1e0) (0xc000570780) Stream added, broadcasting: 3\nI0514 21:39:46.635347 1459 log.go:172] (0xc0000ed1e0) Reply frame received for 3\nI0514 21:39:46.635371 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebf40) Create stream\nI0514 21:39:46.635378 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebf40) Stream added, broadcasting: 5\nI0514 21:39:46.636237 1459 log.go:172] (0xc0000ed1e0) Reply frame received for 5\nI0514 21:39:46.692390 1459 log.go:172] (0xc0000ed1e0) Data frame received for 3\nI0514 21:39:46.692440 1459 log.go:172] (0xc000570780) (3) Data frame handling\nI0514 21:39:46.692475 1459 log.go:172] (0xc0000ed1e0) Data frame received for 5\nI0514 21:39:46.692494 1459 log.go:172] (0xc0005ebf40) (5) Data frame handling\nI0514 21:39:46.692524 1459 log.go:172] (0xc0005ebf40) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31428\nConnection to 172.17.0.10 31428 port [tcp/31428] succeeded!\nI0514 21:39:46.692649 1459 log.go:172] (0xc0000ed1e0) Data frame received for 5\nI0514 21:39:46.692682 1459 log.go:172] (0xc0005ebf40) (5) Data frame handling\nI0514 21:39:46.694248 1459 log.go:172] (0xc0000ed1e0) Data frame received for 1\nI0514 21:39:46.694273 1459 log.go:172] (0xc0005ebea0) (1) Data frame handling\nI0514 21:39:46.694308 1459 log.go:172] (0xc0005ebea0) (1) Data frame sent\nI0514 21:39:46.694335 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebea0) Stream removed, broadcasting: 1\nI0514 21:39:46.694424 1459 log.go:172] (0xc0000ed1e0) Go away received\nI0514 21:39:46.694860 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebea0) Stream removed, broadcasting: 1\nI0514 
21:39:46.694887 1459 log.go:172] (0xc0000ed1e0) (0xc000570780) Stream removed, broadcasting: 3\nI0514 21:39:46.694906 1459 log.go:172] (0xc0000ed1e0) (0xc0005ebf40) Stream removed, broadcasting: 5\n" May 14 21:39:46.700: INFO: stdout: "" May 14 21:39:46.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8802 execpodj5bjs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31428' May 14 21:39:46.909: INFO: stderr: "I0514 21:39:46.835356 1480 log.go:172] (0xc0001062c0) (0xc000a26000) Create stream\nI0514 21:39:46.835446 1480 log.go:172] (0xc0001062c0) (0xc000a26000) Stream added, broadcasting: 1\nI0514 21:39:46.838591 1480 log.go:172] (0xc0001062c0) Reply frame received for 1\nI0514 21:39:46.838637 1480 log.go:172] (0xc0001062c0) (0xc000b140a0) Create stream\nI0514 21:39:46.838648 1480 log.go:172] (0xc0001062c0) (0xc000b140a0) Stream added, broadcasting: 3\nI0514 21:39:46.839825 1480 log.go:172] (0xc0001062c0) Reply frame received for 3\nI0514 21:39:46.839896 1480 log.go:172] (0xc0001062c0) (0xc000b14140) Create stream\nI0514 21:39:46.839918 1480 log.go:172] (0xc0001062c0) (0xc000b14140) Stream added, broadcasting: 5\nI0514 21:39:46.841389 1480 log.go:172] (0xc0001062c0) Reply frame received for 5\nI0514 21:39:46.902433 1480 log.go:172] (0xc0001062c0) Data frame received for 3\nI0514 21:39:46.902470 1480 log.go:172] (0xc000b140a0) (3) Data frame handling\nI0514 21:39:46.902498 1480 log.go:172] (0xc0001062c0) Data frame received for 5\nI0514 21:39:46.902505 1480 log.go:172] (0xc000b14140) (5) Data frame handling\nI0514 21:39:46.902512 1480 log.go:172] (0xc000b14140) (5) Data frame sent\nI0514 21:39:46.902519 1480 log.go:172] (0xc0001062c0) Data frame received for 5\nI0514 21:39:46.902525 1480 log.go:172] (0xc000b14140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31428\nConnection to 172.17.0.8 31428 port [tcp/31428] succeeded!\nI0514 21:39:46.904006 1480 log.go:172] (0xc0001062c0) Data frame received for 1\nI0514 21:39:46.904018 1480 log.go:172] (0xc000a26000) (1) Data frame handling\nI0514 21:39:46.904030 1480 log.go:172] (0xc000a26000) (1) Data frame sent\nI0514 21:39:46.904047 1480 log.go:172] (0xc0001062c0) (0xc000a26000) Stream removed, broadcasting: 1\nI0514 21:39:46.904129 1480 log.go:172] (0xc0001062c0) Go away received\nI0514 21:39:46.904347 1480 log.go:172] (0xc0001062c0) (0xc000a26000) Stream removed, broadcasting: 1\nI0514 21:39:46.904367 1480 log.go:172] (0xc0001062c0) (0xc000b140a0) Stream removed, broadcasting: 3\nI0514 21:39:46.904378 1480 log.go:172] (0xc0001062c0) (0xc000b14140) Stream removed, broadcasting: 5\n" May 14 21:39:46.909: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:39:46.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8802" for this suite. 
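The three nc probes above check the same service through its DNS name, its cluster IP, and each node's IP at the allocated node port. A by-hand sketch (deployment name and image are illustrative, not from this run):

kubectl create deployment nodeport-demo --image=nginx
kubectl expose deployment nodeport-demo --type=NodePort --port=80
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
CLUSTER_IP=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.clusterIP}')
# From a pod with nc available, all three paths should succeed:
#   nc -zv -t -w 2 nodeport-demo 80
#   nc -zv -t -w 2 $CLUSTER_IP 80
#   nc -zv -t -w 2 <node-ip> $NODE_PORT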
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.167 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":86,"skipped":1436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:39:46.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 14 21:39:46.991: INFO: Waiting up to 5m0s for pod "pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6" in namespace "emptydir-4276" to be "success or failure" May 14 21:39:47.047: INFO: Pod "pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 55.777277ms May 14 21:39:49.051: INFO: Pod "pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06030828s May 14 21:39:51.055: INFO: Pod "pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064065587s STEP: Saw pod success May 14 21:39:51.055: INFO: Pod "pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6" satisfied condition "success or failure" May 14 21:39:51.058: INFO: Trying to get logs from node jerma-worker pod pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6 container test-container: STEP: delete the pod May 14 21:39:51.144: INFO: Waiting for pod pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6 to disappear May 14 21:39:51.174: INFO: Pod pod-6edd69a4-bda6-4225-943b-78c4fd26b3b6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:39:51.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4276" for this suite. 
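The (root,0644,default) case means: a file owned by root, with mode 0644, on an emptyDir backed by the node's default storage medium. A minimal sketch (pod and path names are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/f && chmod 0644 /cache/f && ls -ln /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
EOF
kubectl logs emptydir-demo   # expect: -rw-r--r-- 1 0 0 ... /cache/f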
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1475,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:39:51.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-b5bedf07-6c20-4a7c-86d6-8e37160b7deb STEP: Creating configMap with name cm-test-opt-upd-bf8b1d23-d062-4402-8796-274e90b57e78 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b5bedf07-6c20-4a7c-86d6-8e37160b7deb STEP: Updating configmap cm-test-opt-upd-bf8b1d23-d062-4402-8796-274e90b57e78 STEP: Creating configMap with name cm-test-opt-create-9401a947-67ea-45b4-946f-451e153863ad STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:01.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7072" for this suite. • [SLOW TEST:10.412 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:01.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 14 21:40:01.714: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7699 /api/v1/namespaces/watch-7699/configmaps/e2e-watch-test-watch-closed 23fb2dff-2077-4924-9a2a-f06838877974 16209459 0 2020-05-14 21:40:01 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 21:40:01.714: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7699 /api/v1/namespaces/watch-7699/configmaps/e2e-watch-test-watch-closed 23fb2dff-2077-4924-9a2a-f06838877974 16209460 0 2020-05-14 21:40:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 14 21:40:01.767: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7699 /api/v1/namespaces/watch-7699/configmaps/e2e-watch-test-watch-closed 23fb2dff-2077-4924-9a2a-f06838877974 16209461 0 2020-05-14 21:40:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 21:40:01.767: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7699 /api/v1/namespaces/watch-7699/configmaps/e2e-watch-test-watch-closed 23fb2dff-2077-4924-9a2a-f06838877974 16209462 0 2020-05-14 21:40:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:01.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7699" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":89,"skipped":1508,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:01.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8729.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8729.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8729.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8729.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8729.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8729.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 21:40:10.065: INFO: DNS probes using dns-8729/dns-test-2b1119ce-9167-420b-b6c2-99acbe250bb4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:10.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8729" for this suite. • [SLOW TEST:8.940 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":90,"skipped":1516,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:10.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:40:11.434: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 14 21:40:11.783: INFO: Pod name sample-pod: Found 0 pods out of 1 May 14 21:40:16.790: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 21:40:16.790: INFO: Creating deployment "test-rolling-update-deployment" May 14 21:40:16.795: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 14 
21:40:16.809: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 14 21:40:18.817: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 14 21:40:18.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089216, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089216, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089216, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089216, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 21:40:20.824: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 14 21:40:20.880: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4628 /apis/apps/v1/namespaces/deployment-4628/deployments/test-rolling-update-deployment 72265052-b8c6-4138-98db-1ac0db94465c 16209632 1 2020-05-14 21:40:16 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cfb158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-14 21:40:16 +0000 UTC,LastTransitionTime:2020-05-14 21:40:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-14 21:40:19 +0000 UTC,LastTransitionTime:2020-05-14
21:40:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 14 21:40:20.883: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-4628 /apis/apps/v1/namespaces/deployment-4628/replicasets/test-rolling-update-deployment-67cf4f6444 777803b4-f216-4f27-b07a-5bbfffd00d28 16209620 1 2020-05-14 21:40:16 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 72265052-b8c6-4138-98db-1ac0db94465c 0xc002cfb887 0xc002cfb888}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cfb8f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 14 21:40:20.883: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 14 21:40:20.884: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4628 /apis/apps/v1/namespaces/deployment-4628/replicasets/test-rolling-update-controller efd6e408-f0cb-431a-b9b6-1c6831c903e0 16209629 2 2020-05-14 21:40:11 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 72265052-b8c6-4138-98db-1ac0db94465c 0xc002cfb727 0xc002cfb728}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cfb818 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 21:40:20.887: INFO: Pod "test-rolling-update-deployment-67cf4f6444-hg9rp" is available: 
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-hg9rp test-rolling-update-deployment-67cf4f6444- deployment-4628 /api/v1/namespaces/deployment-4628/pods/test-rolling-update-deployment-67cf4f6444-hg9rp dfcdb545-c180-40f4-95a7-5813b1f66752 16209619 0 2020-05-14 21:40:16 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 777803b4-f216-4f27-b07a-5bbfffd00d28 0xc003026057 0xc003026058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sjl7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sjl7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sjl7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 21:40:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 21:40:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 21:40:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 21:40:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.71,StartTime:2020-05-14 21:40:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 21:40:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://aa65150ec57c6c7c4ce72d25cc3bb6ee956394091ab38214db74d5b49bdb3944,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:20.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4628" for this suite. • [SLOW TEST:10.147 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":91,"skipped":1530,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:20.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 14 21:40:21.074: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 21:40:21.126: INFO: Waiting for terminating namespaces to be deleted... 
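Stepping back from the RollingUpdateDeployment dump above: the behaviour under test is that a rolling update deletes old pods and creates new ones, while the superseded ReplicaSet is kept at zero replicas for rollback. A by-hand sketch (deployment name and images are illustrative, not from this run):

kubectl create deployment rolling-demo --image=nginx:1.16
kubectl rollout status deployment/rolling-demo
kubectl set image deployment/rolling-demo nginx=nginx:1.17
kubectl rollout status deployment/rolling-demo
# Old pods are deleted and new ones created; the old ReplicaSet remains, scaled to 0:
kubectl get rs -l app=rolling-demo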
May 14 21:40:21.128: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 14 21:40:21.132: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 21:40:21.132: INFO: Container kindnet-cni ready: true, restart count 0 May 14 21:40:21.132: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 21:40:21.132: INFO: Container kube-proxy ready: true, restart count 0 May 14 21:40:21.132: INFO: test-rolling-update-deployment-67cf4f6444-hg9rp from deployment-4628 started at 2020-05-14 21:40:16 +0000 UTC (1 container status recorded) May 14 21:40:21.132: INFO: Container agnhost ready: true, restart count 0 May 14 21:40:21.132: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 14 21:40:21.138: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 21:40:21.138: INFO: Container kindnet-cni ready: true, restart count 0 May 14 21:40:21.138: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 14 21:40:21.138: INFO: Container kube-bench ready: false, restart count 0 May 14 21:40:21.138: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 21:40:21.138: INFO: Container kube-proxy ready: true, restart count 0 May 14 21:40:21.138: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 14 21:40:21.138: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-73409a21-380f-4984-b651-acb3c2b1c265 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-73409a21-380f-4984-b651-acb3c2b1c265 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-73409a21-380f-4984-b651-acb3c2b1c265 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:29.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4183" for this suite.
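The label/relaunch dance above is the standard nodeSelector pattern. A minimal sketch against the node named in the log (label key, value, and pod name are illustrative):

kubectl label node jerma-worker e2e-demo=42
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod nodeselector-demo -o wide   # should land on jerma-worker
kubectl label node jerma-worker e2e-demo-   # remove the label afterwards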
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.383 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":92,"skipped":1536,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:29.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:29.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6804" for this suite. 
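------------------------------
The Kubelet spec above asserts only that a pod whose container command always fails can still be deleted cleanly. A minimal client-go sketch of that create-then-delete cycle follows; the pod name, namespace, busybox tag, and the modern context-taking signatures are assumptions rather than values taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // the suite generates a namespace such as kubelet-test-6804

	// A container whose command exits non-zero on every start, so the kubelet
	// keeps restarting it (RestartPolicy defaults to Always).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The whole assertion: deletion succeeds even though the container never stays up.
	if err := cs.CoreV1().Pods(ns).Delete(ctx, "bin-false", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod deleted despite its always-failing command")
}
------------------------------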
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:29.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:40:30.077: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 21:40:32.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089230, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089230, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:40:35.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:35.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4344" for this suite. 
STEP: Destroying namespace "webhook-4344-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.235 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":94,"skipped":1605,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:35.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0514 21:40:48.408583 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 21:40:48.408: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:48.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9264" for this suite. 
• [SLOW TEST:12.748 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":95,"skipped":1622,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:48.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:40:48.577: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 7.680827ms) May 14 21:40:48.582: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.492816ms) May 14 21:40:48.605: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 22.724564ms) May 14 21:40:48.619: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 13.85273ms) May 14 21:40:48.622: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.649201ms) May 14 21:40:48.625: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.937969ms) May 14 21:40:48.628: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.906674ms) May 14 21:40:48.631: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.979777ms) May 14 21:40:48.700: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 68.553342ms) May 14 21:40:48.703: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.356878ms) May 14 21:40:48.749: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 45.292469ms) May 14 21:40:48.780: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 30.950436ms) May 14 21:40:48.784: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.667799ms) May 14 21:40:48.787: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.8687ms) May 14 21:40:48.828: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 40.719137ms) May 14 21:40:48.832: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.760243ms) May 14 21:40:48.836: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.708986ms) May 14 21:40:48.839: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.868756ms) May 14 21:40:48.842: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.049131ms) May 14 21:40:48.917: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 74.849359ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:48.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4571" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":96,"skipped":1623,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:48.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-488a117e-20fc-4eae-b43b-56ea35e5098d STEP: Creating a pod to test consume configMaps May 14 21:40:49.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0" in namespace "projected-4282" to be "success or failure" May 14 21:40:49.425: INFO: Pod "pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 69.345494ms May 14 21:40:51.428: INFO: Pod "pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072043222s May 14 21:40:53.432: INFO: Pod "pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07641417s STEP: Saw pod success May 14 21:40:53.432: INFO: Pod "pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0" satisfied condition "success or failure" May 14 21:40:53.436: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0 container projected-configmap-volume-test: STEP: delete the pod May 14 21:40:53.739: INFO: Waiting for pod pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0 to disappear May 14 21:40:53.807: INFO: Pod pod-projected-configmaps-a37370ae-cb70-4f5d-ba04-443e205d8ca0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:53.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4282" for this suite. 
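------------------------------
The projected-ConfigMap spec above mounts a ConfigMap through a projected volume and asserts the file permissions set via defaultMode. A minimal sketch of such a pod in client-go; the ConfigMap name, a data-1 key, the mount path, the busybox image, the 0400 mode, and the context-taking signatures are illustrative assumptions, not values recovered from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default"

	mode := int32(0400) // the defaultMode under test, applied to every projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "docker.io/library/busybox:1.29",
				// Print the octal mode of a projected key so it can be asserted.
				Command: []string{"sh", "-c", "stat -c %a /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; its log should show 400 once it succeeds")
}
------------------------------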
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1637,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:53.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 21:40:58.536: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:40:58.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8185" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1658,"failed":0} SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:40:58.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-731 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-731 to expose endpoints map[] May 14 21:40:59.114: INFO: successfully validated that service multi-endpoint-test in namespace services-731 exposes endpoints map[] (12.771008ms elapsed) STEP: Creating pod pod1 in namespace services-731 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-731 to expose endpoints map[pod1:[100]] May 14 21:41:02.384: INFO: successfully validated that service multi-endpoint-test in namespace services-731 exposes endpoints map[pod1:[100]] (3.219334404s elapsed) STEP: Creating pod pod2 in namespace services-731 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-731 to expose endpoints map[pod1:[100] pod2:[101]] May 14 21:41:05.751: INFO: successfully validated that service multi-endpoint-test in namespace services-731 exposes endpoints map[pod1:[100] pod2:[101]] (3.362789814s elapsed) STEP: Deleting pod pod1 in namespace services-731 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-731 to expose endpoints map[pod2:[101]] May 14 21:41:06.895: INFO: successfully validated that service multi-endpoint-test in namespace services-731 exposes endpoints map[pod2:[101]] (1.138480588s elapsed) STEP: Deleting pod pod2 in namespace services-731 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-731 to expose endpoints map[] May 14 21:41:07.035: INFO: successfully validated that service multi-endpoint-test in namespace services-731 exposes endpoints map[] (116.950067ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:41:07.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-731" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.658 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":99,"skipped":1662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:41:07.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0514 21:41:37.928548 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 21:41:37.928: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:41:37.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3541" for this suite. 
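------------------------------
Here the deciding detail is deleteOptions.PropagationPolicy: with Orphan, deleting the Deployment must leave its ReplicaSet behind, and the spec waits 30 seconds to make sure the garbage collector does not remove it by mistake. A minimal sketch, assuming the deployment name, namespace, and current context-taking client-go signatures.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // the suite generates a namespace such as gc-3541

	// Orphan propagation: ownerReferences are stripped from dependents instead
	// of cascading the delete, so the Deployment's ReplicaSet must live on.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments(ns).Delete(ctx, "simpletest-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}

	rsList, err := cs.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicasets surviving the orphan delete:", len(rsList.Items))
}
------------------------------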
• [SLOW TEST:30.685 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":100,"skipped":1685,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:41:37.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:41:38.034: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:41:42.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-716" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1691,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:41:42.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 21:41:42.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2411' May 14 21:41:42.381: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 21:41:42.381: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller May 14 21:41:42.404: INFO: scanned /root for discovery docs: May 14 21:41:42.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2411' May 14 21:41:59.572: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 14 21:41:59.572: INFO: stdout: "Created e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5\nScaling up e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 14 21:41:59.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2411' May 14 21:41:59.675: INFO: stderr: "" May 14 21:41:59.675: INFO: stdout: "e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5-8blm8 " May 14 21:41:59.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5-8blm8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2411' May 14 21:41:59.780: INFO: stderr: "" May 14 21:41:59.780: INFO: stdout: "true" May 14 21:41:59.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5-8blm8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2411' May 14 21:41:59.887: INFO: stderr: "" May 14 21:41:59.887: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 14 21:41:59.887: INFO: e2e-test-httpd-rc-61e86bb7cb57708479c85505bdea55e5-8blm8 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 14 21:41:59.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2411' May 14 21:41:59.994: INFO: stderr: "" May 14 21:41:59.994: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:41:59.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2411" for this suite. • [SLOW TEST:17.828 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":102,"skipped":1707,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:42:00.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-09e08fb7-c552-4457-b99d-f8c112cf9574 STEP: Creating a pod to test consume configMaps May 14 21:42:00.130: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe" in namespace "projected-8042" to be "success or failure" May 14 21:42:00.152: INFO: Pod "pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 21.620365ms May 14 21:42:02.192: INFO: Pod "pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06201558s May 14 21:42:04.210: INFO: Pod "pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080027878s STEP: Saw pod success May 14 21:42:04.210: INFO: Pod "pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe" satisfied condition "success or failure" May 14 21:42:04.213: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe container projected-configmap-volume-test: STEP: delete the pod May 14 21:42:04.252: INFO: Waiting for pod pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe to disappear May 14 21:42:04.291: INFO: Pod pod-projected-configmaps-3eb77d41-38b8-4a20-bf2b-cb7afb82e9fe no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:42:04.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8042" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:42:04.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9543 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9543 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9543 May 14 21:42:04.703: INFO: Found 0 stateful pods, waiting for 1 May 14 21:42:14.707: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 14 21:42:14.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:42:14.975: INFO: stderr: "I0514 21:42:14.823462 1628 log.go:172] (0xc000b0b3f0) (0xc000866500) Create stream\nI0514 21:42:14.823513 1628 log.go:172] (0xc000b0b3f0) (0xc000866500) Stream added, broadcasting: 1\nI0514 21:42:14.828005 1628 log.go:172] (0xc000b0b3f0) Reply frame received for 1\nI0514 21:42:14.828051 1628 log.go:172] (0xc000b0b3f0) (0xc0005c4640) Create stream\nI0514 21:42:14.828064 1628 log.go:172] (0xc000b0b3f0) (0xc0005c4640) Stream added, broadcasting: 
3\nI0514 21:42:14.828874 1628 log.go:172] (0xc000b0b3f0) Reply frame received for 3\nI0514 21:42:14.828904 1628 log.go:172] (0xc000b0b3f0) (0xc0006fbe00) Create stream\nI0514 21:42:14.828914 1628 log.go:172] (0xc000b0b3f0) (0xc0006fbe00) Stream added, broadcasting: 5\nI0514 21:42:14.829878 1628 log.go:172] (0xc000b0b3f0) Reply frame received for 5\nI0514 21:42:14.946823 1628 log.go:172] (0xc000b0b3f0) Data frame received for 5\nI0514 21:42:14.946850 1628 log.go:172] (0xc0006fbe00) (5) Data frame handling\nI0514 21:42:14.946872 1628 log.go:172] (0xc0006fbe00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:42:14.970158 1628 log.go:172] (0xc000b0b3f0) Data frame received for 3\nI0514 21:42:14.970183 1628 log.go:172] (0xc0005c4640) (3) Data frame handling\nI0514 21:42:14.970190 1628 log.go:172] (0xc0005c4640) (3) Data frame sent\nI0514 21:42:14.970195 1628 log.go:172] (0xc000b0b3f0) Data frame received for 3\nI0514 21:42:14.970199 1628 log.go:172] (0xc0005c4640) (3) Data frame handling\nI0514 21:42:14.970219 1628 log.go:172] (0xc000b0b3f0) Data frame received for 5\nI0514 21:42:14.970224 1628 log.go:172] (0xc0006fbe00) (5) Data frame handling\nI0514 21:42:14.971440 1628 log.go:172] (0xc000b0b3f0) Data frame received for 1\nI0514 21:42:14.971452 1628 log.go:172] (0xc000866500) (1) Data frame handling\nI0514 21:42:14.971460 1628 log.go:172] (0xc000866500) (1) Data frame sent\nI0514 21:42:14.971469 1628 log.go:172] (0xc000b0b3f0) (0xc000866500) Stream removed, broadcasting: 1\nI0514 21:42:14.971678 1628 log.go:172] (0xc000b0b3f0) (0xc000866500) Stream removed, broadcasting: 1\nI0514 21:42:14.971687 1628 log.go:172] (0xc000b0b3f0) (0xc0005c4640) Stream removed, broadcasting: 3\nI0514 21:42:14.971693 1628 log.go:172] (0xc000b0b3f0) (0xc0006fbe00) Stream removed, broadcasting: 5\nI0514 21:42:14.971727 1628 log.go:172] (0xc000b0b3f0) Go away received\n" May 14 21:42:14.975: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:42:14.975: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:42:14.979: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 14 21:42:24.985: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 21:42:24.985: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:42:25.003: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999645s May 14 21:42:26.012: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990406645s May 14 21:42:27.016: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981280586s May 14 21:42:28.020: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.97740746s May 14 21:42:29.027: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972909745s May 14 21:42:30.042: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967017853s May 14 21:42:31.047: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951510418s May 14 21:42:32.068: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.946477341s May 14 21:42:33.100: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.925944743s May 14 21:42:34.114: INFO: Verifying statefulset ss doesn't scale past 1 for another 893.165161ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in 
namespace statefulset-9543 May 14 21:42:35.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:42:35.304: INFO: stderr: "I0514 21:42:35.236903 1649 log.go:172] (0xc00077cb00) (0xc00076a000) Create stream\nI0514 21:42:35.236960 1649 log.go:172] (0xc00077cb00) (0xc00076a000) Stream added, broadcasting: 1\nI0514 21:42:35.239108 1649 log.go:172] (0xc00077cb00) Reply frame received for 1\nI0514 21:42:35.239142 1649 log.go:172] (0xc00077cb00) (0xc000595ae0) Create stream\nI0514 21:42:35.239151 1649 log.go:172] (0xc00077cb00) (0xc000595ae0) Stream added, broadcasting: 3\nI0514 21:42:35.239862 1649 log.go:172] (0xc00077cb00) Reply frame received for 3\nI0514 21:42:35.239907 1649 log.go:172] (0xc00077cb00) (0xc000595cc0) Create stream\nI0514 21:42:35.239920 1649 log.go:172] (0xc00077cb00) (0xc000595cc0) Stream added, broadcasting: 5\nI0514 21:42:35.240591 1649 log.go:172] (0xc00077cb00) Reply frame received for 5\nI0514 21:42:35.299497 1649 log.go:172] (0xc00077cb00) Data frame received for 5\nI0514 21:42:35.299549 1649 log.go:172] (0xc000595cc0) (5) Data frame handling\nI0514 21:42:35.299587 1649 log.go:172] (0xc000595cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:42:35.299898 1649 log.go:172] (0xc00077cb00) Data frame received for 3\nI0514 21:42:35.299915 1649 log.go:172] (0xc000595ae0) (3) Data frame handling\nI0514 21:42:35.299925 1649 log.go:172] (0xc000595ae0) (3) Data frame sent\nI0514 21:42:35.299932 1649 log.go:172] (0xc00077cb00) Data frame received for 3\nI0514 21:42:35.299938 1649 log.go:172] (0xc000595ae0) (3) Data frame handling\nI0514 21:42:35.299954 1649 log.go:172] (0xc00077cb00) Data frame received for 5\nI0514 21:42:35.299972 1649 log.go:172] (0xc000595cc0) (5) Data frame handling\nI0514 21:42:35.301050 1649 log.go:172] (0xc00077cb00) Data frame received for 1\nI0514 21:42:35.301066 1649 log.go:172] (0xc00076a000) (1) Data frame handling\nI0514 21:42:35.301076 1649 log.go:172] (0xc00076a000) (1) Data frame sent\nI0514 21:42:35.301309 1649 log.go:172] (0xc00077cb00) (0xc00076a000) Stream removed, broadcasting: 1\nI0514 21:42:35.301601 1649 log.go:172] (0xc00077cb00) (0xc00076a000) Stream removed, broadcasting: 1\nI0514 21:42:35.301615 1649 log.go:172] (0xc00077cb00) (0xc000595ae0) Stream removed, broadcasting: 3\nI0514 21:42:35.301624 1649 log.go:172] (0xc00077cb00) (0xc000595cc0) Stream removed, broadcasting: 5\n" May 14 21:42:35.304: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:42:35.304: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:42:35.306: INFO: Found 1 stateful pods, waiting for 3 May 14 21:42:45.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:42:45.311: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:42:45.311: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 14 21:42:45.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:42:45.511: 
INFO: stderr: "I0514 21:42:45.442753 1669 log.go:172] (0xc000a06000) (0xc000a4c000) Create stream\nI0514 21:42:45.442841 1669 log.go:172] (0xc000a06000) (0xc000a4c000) Stream added, broadcasting: 1\nI0514 21:42:45.445859 1669 log.go:172] (0xc000a06000) Reply frame received for 1\nI0514 21:42:45.445886 1669 log.go:172] (0xc000a06000) (0xc000a72640) Create stream\nI0514 21:42:45.445894 1669 log.go:172] (0xc000a06000) (0xc000a72640) Stream added, broadcasting: 3\nI0514 21:42:45.446784 1669 log.go:172] (0xc000a06000) Reply frame received for 3\nI0514 21:42:45.446818 1669 log.go:172] (0xc000a06000) (0xc0005a1e00) Create stream\nI0514 21:42:45.446829 1669 log.go:172] (0xc000a06000) (0xc0005a1e00) Stream added, broadcasting: 5\nI0514 21:42:45.447645 1669 log.go:172] (0xc000a06000) Reply frame received for 5\nI0514 21:42:45.505525 1669 log.go:172] (0xc000a06000) Data frame received for 3\nI0514 21:42:45.505589 1669 log.go:172] (0xc000a72640) (3) Data frame handling\nI0514 21:42:45.505624 1669 log.go:172] (0xc000a72640) (3) Data frame sent\nI0514 21:42:45.505646 1669 log.go:172] (0xc000a06000) Data frame received for 3\nI0514 21:42:45.505663 1669 log.go:172] (0xc000a72640) (3) Data frame handling\nI0514 21:42:45.505682 1669 log.go:172] (0xc000a06000) Data frame received for 5\nI0514 21:42:45.505701 1669 log.go:172] (0xc0005a1e00) (5) Data frame handling\nI0514 21:42:45.505720 1669 log.go:172] (0xc0005a1e00) (5) Data frame sent\nI0514 21:42:45.505738 1669 log.go:172] (0xc000a06000) Data frame received for 5\nI0514 21:42:45.505757 1669 log.go:172] (0xc0005a1e00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:42:45.507320 1669 log.go:172] (0xc000a06000) Data frame received for 1\nI0514 21:42:45.507365 1669 log.go:172] (0xc000a4c000) (1) Data frame handling\nI0514 21:42:45.507382 1669 log.go:172] (0xc000a4c000) (1) Data frame sent\nI0514 21:42:45.507405 1669 log.go:172] (0xc000a06000) (0xc000a4c000) Stream removed, broadcasting: 1\nI0514 21:42:45.507468 1669 log.go:172] (0xc000a06000) Go away received\nI0514 21:42:45.507728 1669 log.go:172] (0xc000a06000) (0xc000a4c000) Stream removed, broadcasting: 1\nI0514 21:42:45.507748 1669 log.go:172] (0xc000a06000) (0xc000a72640) Stream removed, broadcasting: 3\nI0514 21:42:45.507757 1669 log.go:172] (0xc000a06000) (0xc0005a1e00) Stream removed, broadcasting: 5\n" May 14 21:42:45.511: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:42:45.511: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:42:45.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:42:45.720: INFO: stderr: "I0514 21:42:45.635850 1688 log.go:172] (0xc0000ecf20) (0xc0005e3d60) Create stream\nI0514 21:42:45.635893 1688 log.go:172] (0xc0000ecf20) (0xc0005e3d60) Stream added, broadcasting: 1\nI0514 21:42:45.637949 1688 log.go:172] (0xc0000ecf20) Reply frame received for 1\nI0514 21:42:45.637979 1688 log.go:172] (0xc0000ecf20) (0xc000668000) Create stream\nI0514 21:42:45.637986 1688 log.go:172] (0xc0000ecf20) (0xc000668000) Stream added, broadcasting: 3\nI0514 21:42:45.638598 1688 log.go:172] (0xc0000ecf20) Reply frame received for 3\nI0514 21:42:45.638625 1688 log.go:172] (0xc0000ecf20) (0xc0005e3e00) Create stream\nI0514 21:42:45.638633 1688 log.go:172] (0xc0000ecf20) 
(0xc0005e3e00) Stream added, broadcasting: 5\nI0514 21:42:45.639151 1688 log.go:172] (0xc0000ecf20) Reply frame received for 5\nI0514 21:42:45.685277 1688 log.go:172] (0xc0000ecf20) Data frame received for 5\nI0514 21:42:45.685298 1688 log.go:172] (0xc0005e3e00) (5) Data frame handling\nI0514 21:42:45.685305 1688 log.go:172] (0xc0005e3e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:42:45.714195 1688 log.go:172] (0xc0000ecf20) Data frame received for 3\nI0514 21:42:45.714261 1688 log.go:172] (0xc000668000) (3) Data frame handling\nI0514 21:42:45.714279 1688 log.go:172] (0xc000668000) (3) Data frame sent\nI0514 21:42:45.714286 1688 log.go:172] (0xc0000ecf20) Data frame received for 3\nI0514 21:42:45.714292 1688 log.go:172] (0xc000668000) (3) Data frame handling\nI0514 21:42:45.714304 1688 log.go:172] (0xc0000ecf20) Data frame received for 5\nI0514 21:42:45.714311 1688 log.go:172] (0xc0005e3e00) (5) Data frame handling\nI0514 21:42:45.715656 1688 log.go:172] (0xc0000ecf20) Data frame received for 1\nI0514 21:42:45.715672 1688 log.go:172] (0xc0005e3d60) (1) Data frame handling\nI0514 21:42:45.715681 1688 log.go:172] (0xc0005e3d60) (1) Data frame sent\nI0514 21:42:45.715691 1688 log.go:172] (0xc0000ecf20) (0xc0005e3d60) Stream removed, broadcasting: 1\nI0514 21:42:45.715704 1688 log.go:172] (0xc0000ecf20) Go away received\nI0514 21:42:45.716038 1688 log.go:172] (0xc0000ecf20) (0xc0005e3d60) Stream removed, broadcasting: 1\nI0514 21:42:45.716074 1688 log.go:172] (0xc0000ecf20) (0xc000668000) Stream removed, broadcasting: 3\nI0514 21:42:45.716095 1688 log.go:172] (0xc0000ecf20) (0xc0005e3e00) Stream removed, broadcasting: 5\n" May 14 21:42:45.720: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:42:45.720: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:42:45.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:42:45.967: INFO: stderr: "I0514 21:42:45.836364 1709 log.go:172] (0xc0006220b0) (0xc0007f6000) Create stream\nI0514 21:42:45.836420 1709 log.go:172] (0xc0006220b0) (0xc0007f6000) Stream added, broadcasting: 1\nI0514 21:42:45.838889 1709 log.go:172] (0xc0006220b0) Reply frame received for 1\nI0514 21:42:45.838920 1709 log.go:172] (0xc0006220b0) (0xc0007f60a0) Create stream\nI0514 21:42:45.838942 1709 log.go:172] (0xc0006220b0) (0xc0007f60a0) Stream added, broadcasting: 3\nI0514 21:42:45.839692 1709 log.go:172] (0xc0006220b0) Reply frame received for 3\nI0514 21:42:45.839738 1709 log.go:172] (0xc0006220b0) (0xc000840000) Create stream\nI0514 21:42:45.839754 1709 log.go:172] (0xc0006220b0) (0xc000840000) Stream added, broadcasting: 5\nI0514 21:42:45.840714 1709 log.go:172] (0xc0006220b0) Reply frame received for 5\nI0514 21:42:45.911594 1709 log.go:172] (0xc0006220b0) Data frame received for 5\nI0514 21:42:45.911626 1709 log.go:172] (0xc000840000) (5) Data frame handling\nI0514 21:42:45.911652 1709 log.go:172] (0xc000840000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:42:45.963330 1709 log.go:172] (0xc0006220b0) Data frame received for 5\nI0514 21:42:45.963536 1709 log.go:172] (0xc000840000) (5) Data frame handling\nI0514 21:42:45.963566 1709 log.go:172] (0xc0006220b0) Data frame received for 3\nI0514 21:42:45.963595 1709 log.go:172] 
(0xc0007f60a0) (3) Data frame handling\nI0514 21:42:45.963609 1709 log.go:172] (0xc0007f60a0) (3) Data frame sent\nI0514 21:42:45.963631 1709 log.go:172] (0xc0006220b0) Data frame received for 3\nI0514 21:42:45.963641 1709 log.go:172] (0xc0007f60a0) (3) Data frame handling\nI0514 21:42:45.963684 1709 log.go:172] (0xc0006220b0) Data frame received for 1\nI0514 21:42:45.963699 1709 log.go:172] (0xc0007f6000) (1) Data frame handling\nI0514 21:42:45.963732 1709 log.go:172] (0xc0007f6000) (1) Data frame sent\nI0514 21:42:45.963774 1709 log.go:172] (0xc0006220b0) (0xc0007f6000) Stream removed, broadcasting: 1\nI0514 21:42:45.963806 1709 log.go:172] (0xc0006220b0) Go away received\nI0514 21:42:45.963947 1709 log.go:172] (0xc0006220b0) (0xc0007f6000) Stream removed, broadcasting: 1\nI0514 21:42:45.963961 1709 log.go:172] (0xc0006220b0) (0xc0007f60a0) Stream removed, broadcasting: 3\nI0514 21:42:45.963970 1709 log.go:172] (0xc0006220b0) (0xc000840000) Stream removed, broadcasting: 5\n" May 14 21:42:45.967: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:42:45.967: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:42:45.967: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:42:45.970: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 14 21:42:55.978: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 21:42:55.978: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 21:42:55.978: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 21:42:56.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999541s May 14 21:42:57.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.875840725s May 14 21:42:58.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.871732508s May 14 21:42:59.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.868199521s May 14 21:43:00.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.865149145s May 14 21:43:01.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.846640541s May 14 21:43:02.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.84215751s May 14 21:43:03.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.837329201s May 14 21:43:04.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.819940649s May 14 21:43:05.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 808.853783ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9543 May 14 21:43:06.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:43:06.416: INFO: stderr: "I0514 21:43:06.322424 1726 log.go:172] (0xc000948000) (0xc0007c8000) Create stream\nI0514 21:43:06.322492 1726 log.go:172] (0xc000948000) (0xc0007c8000) Stream added, broadcasting: 1\nI0514 21:43:06.324714 1726 log.go:172] (0xc000948000) Reply frame received for 1\nI0514 21:43:06.324761 1726 log.go:172] (0xc000948000) (0xc00082a000) Create stream\nI0514 21:43:06.324775 1726 log.go:172] (0xc000948000) (0xc00082a000) Stream added, broadcasting: 
3\nI0514 21:43:06.325807 1726 log.go:172] (0xc000948000) Reply frame received for 3\nI0514 21:43:06.325854 1726 log.go:172] (0xc000948000) (0xc00056e320) Create stream\nI0514 21:43:06.325876 1726 log.go:172] (0xc000948000) (0xc00056e320) Stream added, broadcasting: 5\nI0514 21:43:06.326669 1726 log.go:172] (0xc000948000) Reply frame received for 5\nI0514 21:43:06.409818 1726 log.go:172] (0xc000948000) Data frame received for 3\nI0514 21:43:06.409854 1726 log.go:172] (0xc00082a000) (3) Data frame handling\nI0514 21:43:06.409876 1726 log.go:172] (0xc00082a000) (3) Data frame sent\nI0514 21:43:06.409888 1726 log.go:172] (0xc000948000) Data frame received for 3\nI0514 21:43:06.409897 1726 log.go:172] (0xc00082a000) (3) Data frame handling\nI0514 21:43:06.409918 1726 log.go:172] (0xc000948000) Data frame received for 5\nI0514 21:43:06.409943 1726 log.go:172] (0xc00056e320) (5) Data frame handling\nI0514 21:43:06.409970 1726 log.go:172] (0xc00056e320) (5) Data frame sent\nI0514 21:43:06.409984 1726 log.go:172] (0xc000948000) Data frame received for 5\nI0514 21:43:06.409996 1726 log.go:172] (0xc00056e320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:43:06.411252 1726 log.go:172] (0xc000948000) Data frame received for 1\nI0514 21:43:06.411277 1726 log.go:172] (0xc0007c8000) (1) Data frame handling\nI0514 21:43:06.411295 1726 log.go:172] (0xc0007c8000) (1) Data frame sent\nI0514 21:43:06.411454 1726 log.go:172] (0xc000948000) (0xc0007c8000) Stream removed, broadcasting: 1\nI0514 21:43:06.411522 1726 log.go:172] (0xc000948000) Go away received\nI0514 21:43:06.411797 1726 log.go:172] (0xc000948000) (0xc0007c8000) Stream removed, broadcasting: 1\nI0514 21:43:06.411817 1726 log.go:172] (0xc000948000) (0xc00082a000) Stream removed, broadcasting: 3\nI0514 21:43:06.411827 1726 log.go:172] (0xc000948000) (0xc00056e320) Stream removed, broadcasting: 5\n" May 14 21:43:06.416: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:43:06.416: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:43:06.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:43:06.604: INFO: stderr: "I0514 21:43:06.540971 1744 log.go:172] (0xc000a66790) (0xc00065b9a0) Create stream\nI0514 21:43:06.541019 1744 log.go:172] (0xc000a66790) (0xc00065b9a0) Stream added, broadcasting: 1\nI0514 21:43:06.542961 1744 log.go:172] (0xc000a66790) Reply frame received for 1\nI0514 21:43:06.542994 1744 log.go:172] (0xc000a66790) (0xc0009700a0) Create stream\nI0514 21:43:06.543003 1744 log.go:172] (0xc000a66790) (0xc0009700a0) Stream added, broadcasting: 3\nI0514 21:43:06.543797 1744 log.go:172] (0xc000a66790) Reply frame received for 3\nI0514 21:43:06.543847 1744 log.go:172] (0xc000a66790) (0xc00020a000) Create stream\nI0514 21:43:06.543861 1744 log.go:172] (0xc000a66790) (0xc00020a000) Stream added, broadcasting: 5\nI0514 21:43:06.544584 1744 log.go:172] (0xc000a66790) Reply frame received for 5\nI0514 21:43:06.598112 1744 log.go:172] (0xc000a66790) Data frame received for 5\nI0514 21:43:06.598145 1744 log.go:172] (0xc00020a000) (5) Data frame handling\nI0514 21:43:06.598168 1744 log.go:172] (0xc000a66790) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:43:06.598194 1744 log.go:172] 
(0xc0009700a0) (3) Data frame handling\nI0514 21:43:06.598205 1744 log.go:172] (0xc0009700a0) (3) Data frame sent\nI0514 21:43:06.598212 1744 log.go:172] (0xc000a66790) Data frame received for 3\nI0514 21:43:06.598218 1744 log.go:172] (0xc0009700a0) (3) Data frame handling\nI0514 21:43:06.598243 1744 log.go:172] (0xc00020a000) (5) Data frame sent\nI0514 21:43:06.598250 1744 log.go:172] (0xc000a66790) Data frame received for 5\nI0514 21:43:06.598258 1744 log.go:172] (0xc00020a000) (5) Data frame handling\nI0514 21:43:06.599608 1744 log.go:172] (0xc000a66790) Data frame received for 1\nI0514 21:43:06.599658 1744 log.go:172] (0xc00065b9a0) (1) Data frame handling\nI0514 21:43:06.599704 1744 log.go:172] (0xc00065b9a0) (1) Data frame sent\nI0514 21:43:06.599721 1744 log.go:172] (0xc000a66790) (0xc00065b9a0) Stream removed, broadcasting: 1\nI0514 21:43:06.599939 1744 log.go:172] (0xc000a66790) Go away received\nI0514 21:43:06.600186 1744 log.go:172] (0xc000a66790) (0xc00065b9a0) Stream removed, broadcasting: 1\nI0514 21:43:06.600205 1744 log.go:172] (0xc000a66790) (0xc0009700a0) Stream removed, broadcasting: 3\nI0514 21:43:06.600218 1744 log.go:172] (0xc000a66790) (0xc00020a000) Stream removed, broadcasting: 5\n" May 14 21:43:06.604: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:43:06.604: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:43:06.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9543 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:43:06.851: INFO: stderr: "I0514 21:43:06.768398 1767 log.go:172] (0xc0009e2630) (0xc000a161e0) Create stream\nI0514 21:43:06.768462 1767 log.go:172] (0xc0009e2630) (0xc000a161e0) Stream added, broadcasting: 1\nI0514 21:43:06.771203 1767 log.go:172] (0xc0009e2630) Reply frame received for 1\nI0514 21:43:06.771243 1767 log.go:172] (0xc0009e2630) (0xc000665c20) Create stream\nI0514 21:43:06.771253 1767 log.go:172] (0xc0009e2630) (0xc000665c20) Stream added, broadcasting: 3\nI0514 21:43:06.772267 1767 log.go:172] (0xc0009e2630) Reply frame received for 3\nI0514 21:43:06.772314 1767 log.go:172] (0xc0009e2630) (0xc000a16280) Create stream\nI0514 21:43:06.772325 1767 log.go:172] (0xc0009e2630) (0xc000a16280) Stream added, broadcasting: 5\nI0514 21:43:06.773286 1767 log.go:172] (0xc0009e2630) Reply frame received for 5\nI0514 21:43:06.844575 1767 log.go:172] (0xc0009e2630) Data frame received for 5\nI0514 21:43:06.844633 1767 log.go:172] (0xc0009e2630) Data frame received for 3\nI0514 21:43:06.844678 1767 log.go:172] (0xc000665c20) (3) Data frame handling\nI0514 21:43:06.844701 1767 log.go:172] (0xc000665c20) (3) Data frame sent\nI0514 21:43:06.844719 1767 log.go:172] (0xc0009e2630) Data frame received for 3\nI0514 21:43:06.844737 1767 log.go:172] (0xc000665c20) (3) Data frame handling\nI0514 21:43:06.844760 1767 log.go:172] (0xc000a16280) (5) Data frame handling\nI0514 21:43:06.844794 1767 log.go:172] (0xc000a16280) (5) Data frame sent\nI0514 21:43:06.844809 1767 log.go:172] (0xc0009e2630) Data frame received for 5\nI0514 21:43:06.844820 1767 log.go:172] (0xc000a16280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:43:06.846385 1767 log.go:172] (0xc0009e2630) Data frame received for 1\nI0514 21:43:06.846422 1767 log.go:172] (0xc000a161e0) (1) Data frame handling\nI0514 21:43:06.846454 
1767 log.go:172] (0xc000a161e0) (1) Data frame sent\nI0514 21:43:06.846486 1767 log.go:172] (0xc0009e2630) (0xc000a161e0) Stream removed, broadcasting: 1\nI0514 21:43:06.846516 1767 log.go:172] (0xc0009e2630) Go away received\nI0514 21:43:06.846861 1767 log.go:172] (0xc0009e2630) (0xc000a161e0) Stream removed, broadcasting: 1\nI0514 21:43:06.846885 1767 log.go:172] (0xc0009e2630) (0xc000665c20) Stream removed, broadcasting: 3\nI0514 21:43:06.846909 1767 log.go:172] (0xc0009e2630) (0xc000a16280) Stream removed, broadcasting: 5\n" May 14 21:43:06.851: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:43:06.851: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:43:06.851: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 14 21:43:26.877: INFO: Deleting all statefulset in ns statefulset-9543 May 14 21:43:26.880: INFO: Scaling statefulset ss to 0 May 14 21:43:26.889: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:43:26.891: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:43:26.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9543" for this suite. • [SLOW TEST:82.615 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":104,"skipped":1752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:43:26.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 14 21:43:26.980: INFO: Waiting up to 5m0s for pod "client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d" in namespace "containers-1693" to be "success or failure" May 14 21:43:26.983: INFO: Pod 
"client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620853ms May 14 21:43:29.055: INFO: Pod "client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07531491s May 14 21:43:31.060: INFO: Pod "client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079700448s STEP: Saw pod success May 14 21:43:31.060: INFO: Pod "client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d" satisfied condition "success or failure" May 14 21:43:31.063: INFO: Trying to get logs from node jerma-worker2 pod client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d container test-container: STEP: delete the pod May 14 21:43:31.090: INFO: Waiting for pod client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d to disappear May 14 21:43:31.094: INFO: Pod client-containers-4420e337-aff4-4c17-b96b-17d71c7ff84d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:43:31.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1693" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:43:31.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-9273/secret-test-78fcafdf-d7f6-4575-b2aa-507230495c24 STEP: Creating a pod to test consume secrets May 14 21:43:31.198: INFO: Waiting up to 5m0s for pod "pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe" in namespace "secrets-9273" to be "success or failure" May 14 21:43:31.267: INFO: Pod "pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 68.235524ms May 14 21:43:33.553: INFO: Pod "pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354658068s May 14 21:43:35.558: INFO: Pod "pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe": Phase="Running", Reason="", readiness=true. Elapsed: 4.359440212s May 14 21:43:37.561: INFO: Pod "pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.362771425s STEP: Saw pod success May 14 21:43:37.561: INFO: Pod "pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe" satisfied condition "success or failure" May 14 21:43:37.564: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe container env-test: STEP: delete the pod May 14 21:43:37.593: INFO: Waiting for pod pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe to disappear May 14 21:43:37.696: INFO: Pod pod-configmaps-108f0fc7-934c-4032-a658-44cb17bcf9fe no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:43:37.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9273" for this suite. • [SLOW TEST:6.603 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1804,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:43:37.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4733 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 14 21:43:37.993: INFO: Found 0 stateful pods, waiting for 3 May 14 21:43:47.996: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:43:47.997: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:43:47.997: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 14 21:43:57.999: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:43:57.999: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:43:57.999: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 14 21:43:58.062: INFO: Updating stateful set ss2 STEP: Creating a new 
revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 14 21:44:08.117: INFO: Updating stateful set ss2 May 14 21:44:08.150: INFO: Waiting for Pod statefulset-4733/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:44:18.162: INFO: Waiting for Pod statefulset-4733/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 14 21:44:28.457: INFO: Found 2 stateful pods, waiting for 3 May 14 21:44:38.463: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:44:38.463: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:44:38.463: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 14 21:44:38.487: INFO: Updating stateful set ss2 May 14 21:44:38.523: INFO: Waiting for Pod statefulset-4733/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:44:48.530: INFO: Waiting for Pod statefulset-4733/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:44:58.548: INFO: Updating stateful set ss2 May 14 21:44:58.710: INFO: Waiting for StatefulSet statefulset-4733/ss2 to complete update May 14 21:44:58.710: INFO: Waiting for Pod statefulset-4733/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 14 21:45:08.722: INFO: Deleting all statefulset in ns statefulset-4733 May 14 21:45:08.724: INFO: Scaling statefulset ss2 to 0 May 14 21:45:28.750: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:45:28.753: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:45:28.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4733" for this suite. 
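For context on the canary and phased steps above: what holds ss2-0 and ss2-1 on the old revision while ss2-2 rolls is a partition on the StatefulSet's RollingUpdate strategy; lowering the partition in stages is the phased update. A minimal Go sketch, assuming the k8s.io/api apps/v1 types (the partition value matches a 3-replica set like ss2; the int32Ptr helper is illustrative, not the test's own code):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// With Partition=2, only pods with ordinal >= 2 (here ss2-2) roll to
	// the new revision; ss2-0 and ss2-1 keep the old one until the
	// partition is lowered.
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: int32Ptr(2),
		},
	}
	fmt.Printf("%+v\n", strategy)
}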
• [SLOW TEST:111.073 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":107,"skipped":1808,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:45:28.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b2d46895-aa61-4d9b-afc1-8dd95fe43290 STEP: Creating a pod to test consume configMaps May 14 21:45:28.932: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9" in namespace "configmap-6829" to be "success or failure" May 14 21:45:28.936: INFO: Pod "pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.96456ms May 14 21:45:30.941: INFO: Pod "pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008073192s May 14 21:45:32.945: INFO: Pod "pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012802469s STEP: Saw pod success May 14 21:45:32.945: INFO: Pod "pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9" satisfied condition "success or failure" May 14 21:45:32.948: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9 container configmap-volume-test: STEP: delete the pod May 14 21:45:33.076: INFO: Waiting for pod pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9 to disappear May 14 21:45:33.081: INFO: Pod pod-configmaps-4d4b7c9d-f6a2-4ea0-921d-bda6acc7b0d9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:45:33.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6829" for this suite. 
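For context on the non-root ConfigMap-volume case above: the pod sets a non-zero RunAsUser and mounts the ConfigMap as files. A minimal Go sketch, assuming k8s.io/api core/v1 types; the names (cfg-reader, my-config, data-1) are illustrative, not the test fixture's:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cfg-reader"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // any non-zero UID makes the pod non-root
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/config/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/config",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}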
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:45:33.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 14 21:45:37.249: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 14 21:45:52.354: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:45:52.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3113" for this suite. 
• [SLOW TEST:19.276 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":109,"skipped":1843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:45:52.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 14 21:45:52.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9142' May 14 21:45:52.905: INFO: stderr: "" May 14 21:45:52.905: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 21:45:52.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9142' May 14 21:45:53.035: INFO: stderr: "" May 14 21:45:53.036: INFO: stdout: "update-demo-nautilus-j4p7j update-demo-nautilus-jf45j " May 14 21:45:53.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4p7j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:45:53.135: INFO: stderr: "" May 14 21:45:53.135: INFO: stdout: "" May 14 21:45:53.135: INFO: update-demo-nautilus-j4p7j is created but not running May 14 21:45:58.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9142' May 14 21:45:58.227: INFO: stderr: "" May 14 21:45:58.227: INFO: stdout: "update-demo-nautilus-j4p7j update-demo-nautilus-jf45j " May 14 21:45:58.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4p7j -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:45:58.339: INFO: stderr: "" May 14 21:45:58.339: INFO: stdout: "true" May 14 21:45:58.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4p7j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:45:58.437: INFO: stderr: "" May 14 21:45:58.437: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 21:45:58.437: INFO: validating pod update-demo-nautilus-j4p7j May 14 21:45:58.441: INFO: got data: { "image": "nautilus.jpg" } May 14 21:45:58.441: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 21:45:58.441: INFO: update-demo-nautilus-j4p7j is verified up and running May 14 21:45:58.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf45j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:45:58.546: INFO: stderr: "" May 14 21:45:58.546: INFO: stdout: "true" May 14 21:45:58.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf45j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:45:58.643: INFO: stderr: "" May 14 21:45:58.643: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 21:45:58.643: INFO: validating pod update-demo-nautilus-jf45j May 14 21:45:58.647: INFO: got data: { "image": "nautilus.jpg" } May 14 21:45:58.647: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 21:45:58.647: INFO: update-demo-nautilus-jf45j is verified up and running STEP: rolling-update to new replication controller May 14 21:45:58.649: INFO: scanned /root for discovery docs: May 14 21:45:58.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9142' May 14 21:46:21.270: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 14 21:46:21.270: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 14 21:46:21.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9142' May 14 21:46:21.363: INFO: stderr: "" May 14 21:46:21.363: INFO: stdout: "update-demo-kitten-94bxj update-demo-kitten-cgh2b " May 14 21:46:21.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-94bxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:46:21.450: INFO: stderr: "" May 14 21:46:21.450: INFO: stdout: "true" May 14 21:46:21.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-94bxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:46:21.551: INFO: stderr: "" May 14 21:46:21.551: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 14 21:46:21.551: INFO: validating pod update-demo-kitten-94bxj May 14 21:46:21.559: INFO: got data: { "image": "kitten.jpg" } May 14 21:46:21.559: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 14 21:46:21.559: INFO: update-demo-kitten-94bxj is verified up and running May 14 21:46:21.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cgh2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:46:21.661: INFO: stderr: "" May 14 21:46:21.662: INFO: stdout: "true" May 14 21:46:21.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cgh2b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9142' May 14 21:46:21.766: INFO: stderr: "" May 14 21:46:21.766: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 14 21:46:21.766: INFO: validating pod update-demo-kitten-cgh2b May 14 21:46:21.771: INFO: got data: { "image": "kitten.jpg" } May 14 21:46:21.771: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 14 21:46:21.771: INFO: update-demo-kitten-cgh2b is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:46:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9142" for this suite. 
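For context on the --template checks above: kubectl evaluates those expressions as Go text/template programs over the object's JSON (hence the lowercase .items/.metadata.name; "exists" is a kubectl-registered helper, not a stdlib function). A minimal stdlib-only sketch of the same shape, using exported Go struct fields instead of JSON keys:

package main

import (
	"os"
	"text/template"
)

type meta struct{ Name string }
type pod struct{ Metadata meta }
type podList struct{ Items []pod }

func main() {
	// Mirrors: {{range .items}}{{.metadata.name}} {{end}}
	tmpl := template.Must(template.New("names").Parse(
		"{{range .Items}}{{.Metadata.Name}} {{end}}"))
	list := podList{Items: []pod{
		{Metadata: meta{Name: "update-demo-kitten-94bxj"}},
		{Metadata: meta{Name: "update-demo-kitten-cgh2b"}},
	}}
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
	// Prints: update-demo-kitten-94bxj update-demo-kitten-cgh2b
}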
• [SLOW TEST:29.410 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":110,"skipped":1893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:46:21.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:46:21.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3544" for this suite. 
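For context on the discovery-document walk above: fetching /apis and scanning for a group is a one-liner with client-go's discovery client. A minimal sketch, assuming client-go; the kubeconfig path follows the run's convention:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups() // corresponds to GET /apis
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			for _, v := range g.Versions {
				fmt.Println(v.GroupVersion) // expect apiextensions.k8s.io/v1
			}
		}
	}
}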
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":111,"skipped":1928,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:46:21.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 14 21:46:21.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9442 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 14 21:46:25.884: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0514 21:46:25.793652 2098 log.go:172] (0xc000b28a50) (0xc00062dcc0) Create stream\nI0514 21:46:25.793721 2098 log.go:172] (0xc000b28a50) (0xc00062dcc0) Stream added, broadcasting: 1\nI0514 21:46:25.796633 2098 log.go:172] (0xc000b28a50) Reply frame received for 1\nI0514 21:46:25.796696 2098 log.go:172] (0xc000b28a50) (0xc00062dd60) Create stream\nI0514 21:46:25.796714 2098 log.go:172] (0xc000b28a50) (0xc00062dd60) Stream added, broadcasting: 3\nI0514 21:46:25.798038 2098 log.go:172] (0xc000b28a50) Reply frame received for 3\nI0514 21:46:25.798096 2098 log.go:172] (0xc000b28a50) (0xc000884000) Create stream\nI0514 21:46:25.798116 2098 log.go:172] (0xc000b28a50) (0xc000884000) Stream added, broadcasting: 5\nI0514 21:46:25.799293 2098 log.go:172] (0xc000b28a50) Reply frame received for 5\nI0514 21:46:25.799358 2098 log.go:172] (0xc000b28a50) (0xc0008d0000) Create stream\nI0514 21:46:25.799402 2098 log.go:172] (0xc000b28a50) (0xc0008d0000) Stream added, broadcasting: 7\nI0514 21:46:25.800608 2098 log.go:172] (0xc000b28a50) Reply frame received for 7\nI0514 21:46:25.800834 2098 log.go:172] (0xc00062dd60) (3) Writing data frame\nI0514 21:46:25.800950 2098 log.go:172] (0xc00062dd60) (3) Writing data frame\nI0514 21:46:25.802338 2098 log.go:172] (0xc000b28a50) Data frame received for 5\nI0514 21:46:25.802365 2098 log.go:172] (0xc000884000) (5) Data frame handling\nI0514 21:46:25.802399 2098 log.go:172] (0xc000884000) (5) Data frame sent\nI0514 21:46:25.802806 2098 log.go:172] (0xc000b28a50) Data frame received for 5\nI0514 21:46:25.802835 2098 log.go:172] (0xc000884000) (5) Data frame handling\nI0514 21:46:25.802851 2098 log.go:172] (0xc000884000) (5) Data frame sent\nI0514 21:46:25.856143 2098 log.go:172] (0xc000b28a50) Data frame received for 5\nI0514 
21:46:25.856164 2098 log.go:172] (0xc000884000) (5) Data frame handling\nI0514 21:46:25.856187 2098 log.go:172] (0xc000b28a50) Data frame received for 7\nI0514 21:46:25.856194 2098 log.go:172] (0xc0008d0000) (7) Data frame handling\nI0514 21:46:25.856662 2098 log.go:172] (0xc000b28a50) Data frame received for 1\nI0514 21:46:25.856687 2098 log.go:172] (0xc00062dcc0) (1) Data frame handling\nI0514 21:46:25.856716 2098 log.go:172] (0xc00062dcc0) (1) Data frame sent\nI0514 21:46:25.856741 2098 log.go:172] (0xc000b28a50) (0xc00062dcc0) Stream removed, broadcasting: 1\nI0514 21:46:25.856790 2098 log.go:172] (0xc000b28a50) (0xc00062dd60) Stream removed, broadcasting: 3\nI0514 21:46:25.856844 2098 log.go:172] (0xc000b28a50) Go away received\nI0514 21:46:25.857481 2098 log.go:172] (0xc000b28a50) (0xc00062dcc0) Stream removed, broadcasting: 1\nI0514 21:46:25.857514 2098 log.go:172] (0xc000b28a50) (0xc00062dd60) Stream removed, broadcasting: 3\nI0514 21:46:25.857528 2098 log.go:172] (0xc000b28a50) (0xc000884000) Stream removed, broadcasting: 5\nI0514 21:46:25.857539 2098 log.go:172] (0xc000b28a50) (0xc0008d0000) Stream removed, broadcasting: 7\n" May 14 21:46:25.884: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:46:27.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9442" for this suite. • [SLOW TEST:6.175 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":112,"skipped":1944,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:46:28.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cfdef289-c8c9-4bdd-9ac1-507f1bf2d23d STEP: Creating a pod to test consume secrets May 14 21:46:28.731: INFO: Waiting up to 5m0s for pod "pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db" in namespace "secrets-5557" to be "success or failure" May 14 21:46:28.774: INFO: Pod "pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.283438ms May 14 21:46:30.779: INFO: Pod "pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047236861s May 14 21:46:32.783: INFO: Pod "pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db": Phase="Running", Reason="", readiness=true. Elapsed: 4.05129341s May 14 21:46:34.787: INFO: Pod "pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055898789s STEP: Saw pod success May 14 21:46:34.787: INFO: Pod "pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db" satisfied condition "success or failure" May 14 21:46:34.791: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db container secret-volume-test: STEP: delete the pod May 14 21:46:34.840: INFO: Waiting for pod pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db to disappear May 14 21:46:34.849: INFO: Pod pod-secrets-c94b7456-3860-4391-b1ce-8d82fa62a9db no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:46:34.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5557" for this suite. • [SLOW TEST:6.808 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1953,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:46:34.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:46:34.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d" in namespace "downward-api-1404" to be "success or failure" May 14 21:46:34.950: INFO: Pod "downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288244ms May 14 21:46:36.953: INFO: Pod "downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007320846s May 14 21:46:38.958: INFO: Pod "downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011517268s STEP: Saw pod success May 14 21:46:38.958: INFO: Pod "downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d" satisfied condition "success or failure" May 14 21:46:38.961: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d container client-container: STEP: delete the pod May 14 21:46:39.367: INFO: Waiting for pod downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d to disappear May 14 21:46:39.370: INFO: Pod downwardapi-volume-1ba508b6-045c-48d1-9ef7-6dece85bd29d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:46:39.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1404" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1953,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:46:39.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:46:40.282: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 21:46:42.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 21:46:44.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:46:47.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 14 21:46:51.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3639 to-be-attached-pod -i -c=container1' May 14 21:46:51.497: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:46:51.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3639" for this suite. STEP: Destroying namespace "webhook-3639-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.441 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":115,"skipped":1955,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:46:51.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-lfmg STEP: Creating a pod to test atomic-volume-subpath May 14 21:46:51.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lfmg" in namespace "subpath-4395" to be "success or failure" May 14 21:46:51.983: INFO: Pod 
"pod-subpath-test-configmap-lfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 28.067468ms May 14 21:46:54.088: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133145365s May 14 21:46:56.091: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136671932s May 14 21:46:58.286: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 6.331471789s May 14 21:47:00.303: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 8.348705317s May 14 21:47:02.308: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 10.353409697s May 14 21:47:04.315: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 12.360710186s May 14 21:47:06.319: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 14.364619787s May 14 21:47:08.323: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 16.368534794s May 14 21:47:10.328: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 18.37303213s May 14 21:47:12.332: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 20.377659315s May 14 21:47:14.338: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Running", Reason="", readiness=true. Elapsed: 22.382856898s May 14 21:47:16.342: INFO: Pod "pod-subpath-test-configmap-lfmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.386980627s STEP: Saw pod success May 14 21:47:16.342: INFO: Pod "pod-subpath-test-configmap-lfmg" satisfied condition "success or failure" May 14 21:47:16.345: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-lfmg container test-container-subpath-configmap-lfmg: STEP: delete the pod May 14 21:47:16.559: INFO: Waiting for pod pod-subpath-test-configmap-lfmg to disappear May 14 21:47:16.735: INFO: Pod pod-subpath-test-configmap-lfmg no longer exists STEP: Deleting pod pod-subpath-test-configmap-lfmg May 14 21:47:16.735: INFO: Deleting pod "pod-subpath-test-configmap-lfmg" in namespace "subpath-4395" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4395" for this suite. 
• [SLOW TEST:24.882 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":116,"skipped":1967,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:16.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:47:16.907: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 14 21:47:19.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 create -f -' May 14 21:47:24.747: INFO: stderr: "" May 14 21:47:24.747: INFO: stdout: "e2e-test-crd-publish-openapi-4039-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 14 21:47:24.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 delete e2e-test-crd-publish-openapi-4039-crds test-cr' May 14 21:47:24.867: INFO: stderr: "" May 14 21:47:24.867: INFO: stdout: "e2e-test-crd-publish-openapi-4039-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 14 21:47:24.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 apply -f -' May 14 21:47:25.153: INFO: stderr: "" May 14 21:47:25.153: INFO: stdout: "e2e-test-crd-publish-openapi-4039-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 14 21:47:25.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 delete e2e-test-crd-publish-openapi-4039-crds test-cr' May 14 21:47:25.261: INFO: stderr: "" May 14 21:47:25.261: INFO: stdout: "e2e-test-crd-publish-openapi-4039-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 14 21:47:25.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4039-crds' May 14 21:47:25.518: INFO: stderr: "" May 14 21:47:25.518: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4039-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:28.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4225" for this suite. • [SLOW TEST:11.693 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":117,"skipped":1986,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:28.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 21:47:28.582: INFO: Waiting up to 5m0s for pod "pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc" in namespace "emptydir-2296" to be "success or failure" May 14 21:47:28.625: INFO: Pod "pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc": Phase="Pending", Reason="", readiness=false. Elapsed: 43.494787ms May 14 21:47:30.635: INFO: Pod "pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053146927s May 14 21:47:32.638: INFO: Pod "pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05662875s STEP: Saw pod success May 14 21:47:32.638: INFO: Pod "pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc" satisfied condition "success or failure" May 14 21:47:32.640: INFO: Trying to get logs from node jerma-worker pod pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc container test-container: STEP: delete the pod May 14 21:47:32.766: INFO: Waiting for pod pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc to disappear May 14 21:47:32.773: INFO: Pod pod-33c6cb8a-f973-4936-9f16-a4db2f9f33dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:32.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2296" for this suite. 
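For context on the (root,0644,tmpfs) case above: the "tmpfs" part comes from the emptyDir medium. A minimal Go sketch, assuming k8s.io/api core/v1 types; the volume name is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium "Memory" asks the kubelet to back the emptyDir with tmpfs
	// rather than node disk; files the container creates there (e.g.
	// with mode 0644 as root) live only as long as the pod.
	volume := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
	fmt.Printf("%+v\n", volume)
}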
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1989,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:32.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 14 21:47:32.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 21:47:32.857: INFO: Waiting for terminating namespaces to be deleted... May 14 21:47:32.859: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 14 21:47:32.863: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 21:47:32.863: INFO: Container kindnet-cni ready: true, restart count 0 May 14 21:47:32.863: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 21:47:32.863: INFO: Container kube-proxy ready: true, restart count 0 May 14 21:47:32.863: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 14 21:47:32.893: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 21:47:32.893: INFO: Container kindnet-cni ready: true, restart count 0 May 14 21:47:32.893: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 14 21:47:32.893: INFO: Container kube-bench ready: false, restart count 0 May 14 21:47:32.893: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 21:47:32.893: INFO: Container kube-proxy ready: true, restart count 0 May 14 21:47:32.893: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 14 21:47:32.893: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f035406b877d1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f0354088b85a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:33.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-963" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":119,"skipped":1999,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:33.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 14 21:47:33.966: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:41.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5113" for this suite. • [SLOW TEST:8.046 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":120,"skipped":2002,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:41.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-116a5eed-dc7f-4b48-8179-fca42f403833 STEP: Creating a pod to test consume configMaps May 14 21:47:42.045: INFO: Waiting up to 5m0s for pod "pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace" in namespace "configmap-3968" to be "success or failure" May 14 21:47:42.072: INFO: Pod "pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.69108ms May 14 21:47:44.129: INFO: Pod "pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083790229s May 14 21:47:46.133: INFO: Pod "pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace": Phase="Running", Reason="", readiness=true. Elapsed: 4.087998526s May 14 21:47:48.275: INFO: Pod "pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.229269459s STEP: Saw pod success May 14 21:47:48.275: INFO: Pod "pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace" satisfied condition "success or failure" May 14 21:47:48.306: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace container configmap-volume-test: STEP: delete the pod May 14 21:47:48.325: INFO: Waiting for pod pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace to disappear May 14 21:47:48.343: INFO: Pod pod-configmaps-f39e8d10-7187-4ad2-b6c1-ec8678105ace no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:48.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3968" for this suite. • [SLOW TEST:6.383 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:48.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 21:47:48.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1990' May 14 21:47:48.678: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 21:47:48.678: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 14 21:47:48.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1990' May 14 21:47:49.006: INFO: stderr: "" May 14 21:47:49.006: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:49.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1990" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":122,"skipped":2039,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:49.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-3a4e4900-bbe7-4932-ae1e-986bc50eafb5 STEP: Creating a pod to test consume secrets May 14 21:47:49.216: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f" in namespace "projected-766" to be "success or failure" May 14 21:47:49.223: INFO: Pod "pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.017088ms May 14 21:47:51.226: INFO: Pod "pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010705647s May 14 21:47:53.230: INFO: Pod "pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014306592s May 14 21:47:55.234: INFO: Pod "pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018279221s STEP: Saw pod success May 14 21:47:55.234: INFO: Pod "pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f" satisfied condition "success or failure" May 14 21:47:55.237: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f container projected-secret-volume-test: STEP: delete the pod May 14 21:47:55.255: INFO: Waiting for pod pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f to disappear May 14 21:47:55.259: INFO: Pod pod-projected-secrets-2d3fae7e-f723-4989-8787-c5440e6ad55f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:47:55.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-766" for this suite. • [SLOW TEST:6.253 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2056,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:47:55.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-328 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 21:47:55.314: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 21:48:21.488: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.97:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:48:21.488: INFO: >>> kubeConfig: /root/.kube/config I0514 21:48:21.520187 6 log.go:172] (0xc006604790) (0xc001dfb0e0) Create stream I0514 21:48:21.520234 6 log.go:172] (0xc006604790) (0xc001dfb0e0) Stream added, broadcasting: 1 I0514 21:48:21.522081 6 log.go:172] (0xc006604790) Reply frame received for 1 I0514 21:48:21.522116 6 log.go:172] (0xc006604790) (0xc001dfb220) Create stream I0514 21:48:21.522127 6 log.go:172] (0xc006604790) (0xc001dfb220) Stream added, broadcasting: 3 I0514 21:48:21.522904 6 log.go:172] (0xc006604790) Reply frame received for 3 I0514 21:48:21.522931 6 
log.go:172] (0xc006604790) (0xc001dfb4a0) Create stream I0514 21:48:21.522940 6 log.go:172] (0xc006604790) (0xc001dfb4a0) Stream added, broadcasting: 5 I0514 21:48:21.523833 6 log.go:172] (0xc006604790) Reply frame received for 5 I0514 21:48:21.595931 6 log.go:172] (0xc006604790) Data frame received for 3 I0514 21:48:21.595965 6 log.go:172] (0xc001dfb220) (3) Data frame handling I0514 21:48:21.595980 6 log.go:172] (0xc001dfb220) (3) Data frame sent I0514 21:48:21.596133 6 log.go:172] (0xc006604790) Data frame received for 3 I0514 21:48:21.596151 6 log.go:172] (0xc001dfb220) (3) Data frame handling I0514 21:48:21.596694 6 log.go:172] (0xc006604790) Data frame received for 5 I0514 21:48:21.596721 6 log.go:172] (0xc001dfb4a0) (5) Data frame handling I0514 21:48:21.598373 6 log.go:172] (0xc006604790) Data frame received for 1 I0514 21:48:21.598393 6 log.go:172] (0xc001dfb0e0) (1) Data frame handling I0514 21:48:21.598405 6 log.go:172] (0xc001dfb0e0) (1) Data frame sent I0514 21:48:21.598421 6 log.go:172] (0xc006604790) (0xc001dfb0e0) Stream removed, broadcasting: 1 I0514 21:48:21.598437 6 log.go:172] (0xc006604790) Go away received I0514 21:48:21.598525 6 log.go:172] (0xc006604790) (0xc001dfb0e0) Stream removed, broadcasting: 1 I0514 21:48:21.598552 6 log.go:172] (0xc006604790) (0xc001dfb220) Stream removed, broadcasting: 3 I0514 21:48:21.598565 6 log.go:172] (0xc006604790) (0xc001dfb4a0) Stream removed, broadcasting: 5 May 14 21:48:21.598: INFO: Found all expected endpoints: [netserver-0] May 14 21:48:21.606: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.179:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:48:21.606: INFO: >>> kubeConfig: /root/.kube/config I0514 21:48:21.631811 6 log.go:172] (0xc006604dc0) (0xc001dfbae0) Create stream I0514 21:48:21.631839 6 log.go:172] (0xc006604dc0) (0xc001dfbae0) Stream added, broadcasting: 1 I0514 21:48:21.633771 6 log.go:172] (0xc006604dc0) Reply frame received for 1 I0514 21:48:21.633807 6 log.go:172] (0xc006604dc0) (0xc002377d60) Create stream I0514 21:48:21.633821 6 log.go:172] (0xc006604dc0) (0xc002377d60) Stream added, broadcasting: 3 I0514 21:48:21.634559 6 log.go:172] (0xc006604dc0) Reply frame received for 3 I0514 21:48:21.634588 6 log.go:172] (0xc006604dc0) (0xc000dce320) Create stream I0514 21:48:21.634597 6 log.go:172] (0xc006604dc0) (0xc000dce320) Stream added, broadcasting: 5 I0514 21:48:21.635367 6 log.go:172] (0xc006604dc0) Reply frame received for 5 I0514 21:48:21.702277 6 log.go:172] (0xc006604dc0) Data frame received for 5 I0514 21:48:21.702318 6 log.go:172] (0xc000dce320) (5) Data frame handling I0514 21:48:21.702343 6 log.go:172] (0xc006604dc0) Data frame received for 3 I0514 21:48:21.702368 6 log.go:172] (0xc002377d60) (3) Data frame handling I0514 21:48:21.702383 6 log.go:172] (0xc002377d60) (3) Data frame sent I0514 21:48:21.702399 6 log.go:172] (0xc006604dc0) Data frame received for 3 I0514 21:48:21.702410 6 log.go:172] (0xc002377d60) (3) Data frame handling I0514 21:48:21.704362 6 log.go:172] (0xc006604dc0) Data frame received for 1 I0514 21:48:21.704378 6 log.go:172] (0xc001dfbae0) (1) Data frame handling I0514 21:48:21.704385 6 log.go:172] (0xc001dfbae0) (1) Data frame sent I0514 21:48:21.704469 6 log.go:172] (0xc006604dc0) (0xc001dfbae0) Stream removed, broadcasting: 1 I0514 21:48:21.704553 6 log.go:172] (0xc006604dc0) 
(0xc001dfbae0) Stream removed, broadcasting: 1 I0514 21:48:21.704569 6 log.go:172] (0xc006604dc0) (0xc002377d60) Stream removed, broadcasting: 3 I0514 21:48:21.704582 6 log.go:172] (0xc006604dc0) (0xc000dce320) Stream removed, broadcasting: 5 May 14 21:48:21.704: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:48:21.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0514 21:48:21.704875 6 log.go:172] (0xc006604dc0) Go away received STEP: Destroying namespace "pod-network-test-328" for this suite. • [SLOW TEST:26.450 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2072,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:48:21.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:48:21.874: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 14 21:48:23.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3582 create -f -' May 14 21:48:29.651: INFO: stderr: "" May 14 21:48:29.651: INFO: stdout: "e2e-test-crd-publish-openapi-5067-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 14 21:48:29.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3582 delete e2e-test-crd-publish-openapi-5067-crds test-cr' May 14 21:48:29.983: INFO: stderr: "" May 14 21:48:29.983: INFO: stdout: "e2e-test-crd-publish-openapi-5067-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 14 21:48:29.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3582 apply -f -' May 14 21:48:30.249: INFO: stderr: "" May 14 21:48:30.249: INFO: stdout: "e2e-test-crd-publish-openapi-5067-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 14 21:48:30.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-3582 delete e2e-test-crd-publish-openapi-5067-crds test-cr' May 14 21:48:30.359: INFO: stderr: "" May 14 21:48:30.359: INFO: stdout: "e2e-test-crd-publish-openapi-5067-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 14 21:48:30.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5067-crds' May 14 21:48:30.615: INFO: stderr: "" May 14 21:48:30.615: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5067-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:48:33.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3582" for this suite. 
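[Editor's sketch] The CRD under test publishes a schema whose spec is an embedded object that preserves unknown fields. Below is a sketch of such a CRD under assumed names; the group and plural are invented, while the kind and the description strings are taken from the kubectl explain output above.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com              # assumed group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: preserve-unknown-properties in nested field for Testing
        properties:
          spec:
            type: object
            description: Specification of Waldo
            x-kubernetes-embedded-resource: true       # spec holds a whole Kubernetes object
            x-kubernetes-preserve-unknown-fields: true # unknown fields inside it survive pruning
          status:
            type: object
            description: Status of Waldo
            x-kubernetes-preserve-unknown-fields: true
EOF
kubectl explain waldos   # served from the published OpenAPI, as exercised above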
• [SLOW TEST:11.790 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":125,"skipped":2082,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:48:33.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 14 21:48:33.592: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:48:49.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3641" for this suite. 
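[Editor's sketch] The pod lifecycle test above boils down to creating a pod, observing its creation through a watch, deleting it gracefully, and observing the deletion. An equivalent manual sketch, with an illustrative pod name:
# in a second terminal, stream the lifecycle the test's watch observed:
#   kubectl get pods --watch
kubectl run pod-submit-remove --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl delete pod pod-submit-remove --grace-period=30
# the watch shows the pod appear, report the termination notice, then disappear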
• [SLOW TEST:15.988 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2092,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:48:49.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2342 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2342 I0514 21:48:49.645875 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2342, replica count: 2 I0514 21:48:52.696290 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 21:48:55.696564 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 21:48:55.696: INFO: Creating new exec pod May 14 21:49:00.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2342 execpodg4bnh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 14 21:49:00.925: INFO: stderr: "I0514 21:49:00.842150 2398 log.go:172] (0xc000a1ac60) (0xc0009f43c0) Create stream\nI0514 21:49:00.842215 2398 log.go:172] (0xc000a1ac60) (0xc0009f43c0) Stream added, broadcasting: 1\nI0514 21:49:00.846098 2398 log.go:172] (0xc000a1ac60) Reply frame received for 1\nI0514 21:49:00.846152 2398 log.go:172] (0xc000a1ac60) (0xc000630780) Create stream\nI0514 21:49:00.846182 2398 log.go:172] (0xc000a1ac60) (0xc000630780) Stream added, broadcasting: 3\nI0514 21:49:00.847138 2398 log.go:172] (0xc000a1ac60) Reply frame received for 3\nI0514 21:49:00.847177 2398 log.go:172] (0xc000a1ac60) (0xc00079b540) Create stream\nI0514 21:49:00.847187 2398 log.go:172] (0xc000a1ac60) (0xc00079b540) Stream added, broadcasting: 5\nI0514 21:49:00.848079 2398 log.go:172] (0xc000a1ac60) Reply frame received for 5\nI0514 21:49:00.916704 2398 log.go:172] (0xc000a1ac60) Data frame received for 5\nI0514 21:49:00.916728 2398 log.go:172] (0xc00079b540) (5) Data frame handling\nI0514 21:49:00.916743 2398 log.go:172] (0xc00079b540) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0514 21:49:00.917672 2398 log.go:172] (0xc000a1ac60) 
Data frame received for 5\nI0514 21:49:00.917687 2398 log.go:172] (0xc00079b540) (5) Data frame handling\nI0514 21:49:00.917693 2398 log.go:172] (0xc00079b540) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0514 21:49:00.917791 2398 log.go:172] (0xc000a1ac60) Data frame received for 3\nI0514 21:49:00.917837 2398 log.go:172] (0xc000630780) (3) Data frame handling\nI0514 21:49:00.917879 2398 log.go:172] (0xc000a1ac60) Data frame received for 5\nI0514 21:49:00.917890 2398 log.go:172] (0xc00079b540) (5) Data frame handling\nI0514 21:49:00.920974 2398 log.go:172] (0xc000a1ac60) Data frame received for 1\nI0514 21:49:00.920992 2398 log.go:172] (0xc0009f43c0) (1) Data frame handling\nI0514 21:49:00.921016 2398 log.go:172] (0xc0009f43c0) (1) Data frame sent\nI0514 21:49:00.921025 2398 log.go:172] (0xc000a1ac60) (0xc0009f43c0) Stream removed, broadcasting: 1\nI0514 21:49:00.921084 2398 log.go:172] (0xc000a1ac60) Go away received\nI0514 21:49:00.921413 2398 log.go:172] (0xc000a1ac60) (0xc0009f43c0) Stream removed, broadcasting: 1\nI0514 21:49:00.921428 2398 log.go:172] (0xc000a1ac60) (0xc000630780) Stream removed, broadcasting: 3\nI0514 21:49:00.921434 2398 log.go:172] (0xc000a1ac60) (0xc00079b540) Stream removed, broadcasting: 5\n" May 14 21:49:00.926: INFO: stdout: "" May 14 21:49:00.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2342 execpodg4bnh -- /bin/sh -x -c nc -zv -t -w 2 10.106.123.127 80' May 14 21:49:01.146: INFO: stderr: "I0514 21:49:01.061627 2419 log.go:172] (0xc0003d8c60) (0xc0001a5c20) Create stream\nI0514 21:49:01.061697 2419 log.go:172] (0xc0003d8c60) (0xc0001a5c20) Stream added, broadcasting: 1\nI0514 21:49:01.063973 2419 log.go:172] (0xc0003d8c60) Reply frame received for 1\nI0514 21:49:01.064006 2419 log.go:172] (0xc0003d8c60) (0xc0001a5cc0) Create stream\nI0514 21:49:01.064016 2419 log.go:172] (0xc0003d8c60) (0xc0001a5cc0) Stream added, broadcasting: 3\nI0514 21:49:01.064898 2419 log.go:172] (0xc0003d8c60) Reply frame received for 3\nI0514 21:49:01.064932 2419 log.go:172] (0xc0003d8c60) (0xc00082a000) Create stream\nI0514 21:49:01.064943 2419 log.go:172] (0xc0003d8c60) (0xc00082a000) Stream added, broadcasting: 5\nI0514 21:49:01.065936 2419 log.go:172] (0xc0003d8c60) Reply frame received for 5\nI0514 21:49:01.138223 2419 log.go:172] (0xc0003d8c60) Data frame received for 3\nI0514 21:49:01.138272 2419 log.go:172] (0xc0001a5cc0) (3) Data frame handling\nI0514 21:49:01.138306 2419 log.go:172] (0xc0003d8c60) Data frame received for 5\nI0514 21:49:01.138325 2419 log.go:172] (0xc00082a000) (5) Data frame handling\nI0514 21:49:01.138355 2419 log.go:172] (0xc00082a000) (5) Data frame sent\nI0514 21:49:01.138375 2419 log.go:172] (0xc0003d8c60) Data frame received for 5\nI0514 21:49:01.138396 2419 log.go:172] (0xc00082a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.123.127 80\nConnection to 10.106.123.127 80 port [tcp/http] succeeded!\nI0514 21:49:01.140410 2419 log.go:172] (0xc0003d8c60) Data frame received for 1\nI0514 21:49:01.140443 2419 log.go:172] (0xc0001a5c20) (1) Data frame handling\nI0514 21:49:01.140470 2419 log.go:172] (0xc0001a5c20) (1) Data frame sent\nI0514 21:49:01.140493 2419 log.go:172] (0xc0003d8c60) (0xc0001a5c20) Stream removed, broadcasting: 1\nI0514 21:49:01.140511 2419 log.go:172] (0xc0003d8c60) Go away received\nI0514 21:49:01.140907 2419 log.go:172] (0xc0003d8c60) (0xc0001a5c20) Stream removed, broadcasting: 1\nI0514 21:49:01.140925 2419 log.go:172] 
(0xc0003d8c60) (0xc0001a5cc0) Stream removed, broadcasting: 3\nI0514 21:49:01.140934 2419 log.go:172] (0xc0003d8c60) (0xc00082a000) Stream removed, broadcasting: 5\n" May 14 21:49:01.146: INFO: stdout: "" May 14 21:49:01.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2342 execpodg4bnh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31866' May 14 21:49:01.337: INFO: stderr: "I0514 21:49:01.264575 2442 log.go:172] (0xc000024160) (0xc0006dc0a0) Create stream\nI0514 21:49:01.264632 2442 log.go:172] (0xc000024160) (0xc0006dc0a0) Stream added, broadcasting: 1\nI0514 21:49:01.267519 2442 log.go:172] (0xc000024160) Reply frame received for 1\nI0514 21:49:01.267559 2442 log.go:172] (0xc000024160) (0xc000712aa0) Create stream\nI0514 21:49:01.267571 2442 log.go:172] (0xc000024160) (0xc000712aa0) Stream added, broadcasting: 3\nI0514 21:49:01.268548 2442 log.go:172] (0xc000024160) Reply frame received for 3\nI0514 21:49:01.268585 2442 log.go:172] (0xc000024160) (0xc000826000) Create stream\nI0514 21:49:01.268600 2442 log.go:172] (0xc000024160) (0xc000826000) Stream added, broadcasting: 5\nI0514 21:49:01.269657 2442 log.go:172] (0xc000024160) Reply frame received for 5\nI0514 21:49:01.329568 2442 log.go:172] (0xc000024160) Data frame received for 5\nI0514 21:49:01.329606 2442 log.go:172] (0xc000826000) (5) Data frame handling\nI0514 21:49:01.329622 2442 log.go:172] (0xc000826000) (5) Data frame sent\nI0514 21:49:01.329632 2442 log.go:172] (0xc000024160) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.10 31866\nI0514 21:49:01.329640 2442 log.go:172] (0xc000826000) (5) Data frame handling\nI0514 21:49:01.329692 2442 log.go:172] (0xc000826000) (5) Data frame sent\nConnection to 172.17.0.10 31866 port [tcp/31866] succeeded!\nI0514 21:49:01.329999 2442 log.go:172] (0xc000024160) Data frame received for 5\nI0514 21:49:01.330029 2442 log.go:172] (0xc000826000) (5) Data frame handling\nI0514 21:49:01.330153 2442 log.go:172] (0xc000024160) Data frame received for 3\nI0514 21:49:01.330197 2442 log.go:172] (0xc000712aa0) (3) Data frame handling\nI0514 21:49:01.331666 2442 log.go:172] (0xc000024160) Data frame received for 1\nI0514 21:49:01.331698 2442 log.go:172] (0xc0006dc0a0) (1) Data frame handling\nI0514 21:49:01.331723 2442 log.go:172] (0xc0006dc0a0) (1) Data frame sent\nI0514 21:49:01.331756 2442 log.go:172] (0xc000024160) (0xc0006dc0a0) Stream removed, broadcasting: 1\nI0514 21:49:01.331782 2442 log.go:172] (0xc000024160) Go away received\nI0514 21:49:01.332242 2442 log.go:172] (0xc000024160) (0xc0006dc0a0) Stream removed, broadcasting: 1\nI0514 21:49:01.332283 2442 log.go:172] (0xc000024160) (0xc000712aa0) Stream removed, broadcasting: 3\nI0514 21:49:01.332306 2442 log.go:172] (0xc000024160) (0xc000826000) Stream removed, broadcasting: 5\n" May 14 21:49:01.337: INFO: stdout: "" May 14 21:49:01.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2342 execpodg4bnh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31866' May 14 21:49:01.564: INFO: stderr: "I0514 21:49:01.484146 2462 log.go:172] (0xc000aeaa50) (0xc000342000) Create stream\nI0514 21:49:01.484207 2462 log.go:172] (0xc000aeaa50) (0xc000342000) Stream added, broadcasting: 1\nI0514 21:49:01.487349 2462 log.go:172] (0xc000aeaa50) Reply frame received for 1\nI0514 21:49:01.487472 2462 log.go:172] (0xc000aeaa50) (0xc00067b9a0) Create stream\nI0514 21:49:01.487487 2462 log.go:172] (0xc000aeaa50) (0xc00067b9a0) Stream added, broadcasting: 3\nI0514 
21:49:01.488899 2462 log.go:172] (0xc000aeaa50) Reply frame received for 3\nI0514 21:49:01.488945 2462 log.go:172] (0xc000aeaa50) (0xc000342140) Create stream\nI0514 21:49:01.488962 2462 log.go:172] (0xc000aeaa50) (0xc000342140) Stream added, broadcasting: 5\nI0514 21:49:01.490188 2462 log.go:172] (0xc000aeaa50) Reply frame received for 5\nI0514 21:49:01.559086 2462 log.go:172] (0xc000aeaa50) Data frame received for 3\nI0514 21:49:01.559115 2462 log.go:172] (0xc00067b9a0) (3) Data frame handling\nI0514 21:49:01.559131 2462 log.go:172] (0xc000aeaa50) Data frame received for 5\nI0514 21:49:01.559148 2462 log.go:172] (0xc000342140) (5) Data frame handling\nI0514 21:49:01.559169 2462 log.go:172] (0xc000342140) (5) Data frame sent\nI0514 21:49:01.559175 2462 log.go:172] (0xc000aeaa50) Data frame received for 5\nI0514 21:49:01.559181 2462 log.go:172] (0xc000342140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31866\nConnection to 172.17.0.8 31866 port [tcp/31866] succeeded!\nI0514 21:49:01.560823 2462 log.go:172] (0xc000aeaa50) Data frame received for 1\nI0514 21:49:01.560848 2462 log.go:172] (0xc000342000) (1) Data frame handling\nI0514 21:49:01.560864 2462 log.go:172] (0xc000342000) (1) Data frame sent\nI0514 21:49:01.560896 2462 log.go:172] (0xc000aeaa50) (0xc000342000) Stream removed, broadcasting: 1\nI0514 21:49:01.560957 2462 log.go:172] (0xc000aeaa50) Go away received\nI0514 21:49:01.561425 2462 log.go:172] (0xc000aeaa50) (0xc000342000) Stream removed, broadcasting: 1\nI0514 21:49:01.561446 2462 log.go:172] (0xc000aeaa50) (0xc00067b9a0) Stream removed, broadcasting: 3\nI0514 21:49:01.561455 2462 log.go:172] (0xc000aeaa50) (0xc000342140) Stream removed, broadcasting: 5\n" May 14 21:49:01.565: INFO: stdout: "" May 14 21:49:01.565: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:49:01.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2342" for this suite. 
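[Editor's sketch] The service mutation above converts an ExternalName service in place to NodePort and probes it with nc from an exec pod. A sketch under assumptions: the external name, port, and selector are illustrative, and the suite additionally runs a replication controller behind the service.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com            # assumed target
EOF
# flip the type; the apiserver allocates a ClusterIP and a NodePort
kubectl patch service externalname-service --type=merge -p '
{"spec":{"type":"NodePort","externalName":null,
         "selector":{"name":"externalname-service"},
         "ports":[{"port":80,"targetPort":80}]}}'
kubectl get service externalname-service   # note the assigned nodePort
# from any pod with nc installed, mirror the checks above:
#   nc -zv -t -w 2 externalname-service 80
#   nc -zv -t -w 2 <node-ip> <nodePort>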
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.124 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":127,"skipped":2102,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:49:01.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:49:05.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1338" for this suite. 
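[Editor's sketch] The kubelet logging check above is essentially: run a busybox command that writes to stdout, then confirm kubectl logs returns it. A minimal sketch; the pod name and message are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busybox container'"]
EOF
kubectl logs busybox-scheduling   # expect the echoed line back verbatim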
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2104,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:49:05.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:49:06.621: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 14 21:49:08.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 21:49:10.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725089746, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:49:13.766: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:49:13.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9245" for this suite. STEP: Destroying namespace "webhook-9245-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.155 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":129,"skipped":2114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:49:13.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-e3f8a800-6784-4ac3-a424-c6ec5346ee8f in namespace container-probe-1471 May 14 21:49:18.097: INFO: Started pod busybox-e3f8a800-6784-4ac3-a424-c6ec5346ee8f in namespace container-probe-1471 STEP: checking the pod's current state and verifying that restartCount is present May 14 21:49:18.099: INFO: Initial restart count of pod busybox-e3f8a800-6784-4ac3-a424-c6ec5346ee8f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:53:19.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1471" for this suite. 
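[Editor's sketch] The probe test above creates a busybox pod whose exec liveness probe cats a file that is never removed, then verifies over several minutes that restartCount stays 0. A sketch under assumed names and timings:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]   # the file persists, so the probe keeps passing
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# after the observation window, the count should still be zero:
kubectl get pod busybox-liveness -o jsonpath='{.status.containerStatuses[0].restartCount}'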
• [SLOW TEST:245.172 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2144,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:53:19.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 21:53:19.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a" in namespace "downward-api-7220" to be "success or failure" May 14 21:53:19.226: INFO: Pod "downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.731551ms May 14 21:53:21.230: INFO: Pod "downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021852659s May 14 21:53:23.234: INFO: Pod "downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a": Phase="Running", Reason="", readiness=true. Elapsed: 4.025974748s May 14 21:53:25.238: INFO: Pod "downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029944106s STEP: Saw pod success May 14 21:53:25.238: INFO: Pod "downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a" satisfied condition "success or failure" May 14 21:53:25.241: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a container client-container: STEP: delete the pod May 14 21:53:25.294: INFO: Waiting for pod downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a to disappear May 14 21:53:25.307: INFO: Pod downwardapi-volume-b9fd61b8-46cc-405f-a7c1-5402b143a87a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:53:25.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7220" for this suite. 
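[Editor's sketch] The downward API case above mounts a volume whose file carries the container's CPU limit via resourceFieldRef. A sketch; the pod name and the 500m limit are illustrative, and the divisor is set explicitly so the reported value is unambiguous.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                  # report the limit in millicores
EOF
kubectl logs downwardapi-cpu-limit     # expect 500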
• [SLOW TEST:6.180 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2154,"failed":0} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:53:25.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-8bec4023-539b-4c06-80df-b3c4bcf04661 STEP: Creating secret with name s-test-opt-upd-da0cbd85-a63d-4744-831b-333807677621 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8bec4023-539b-4c06-80df-b3c4bcf04661 STEP: Updating secret s-test-opt-upd-da0cbd85-a63d-4744-831b-333807677621 STEP: Creating secret with name s-test-opt-create-8ec1590f-8370-421d-be83-ff1b72d7ff6f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:54:54.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7108" for this suite. 
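[Editor's sketch] The projected-secret case above watches a volume assembled from three optional secrets: one deleted, one updated, and one created mid-test. A sketch with the UUID suffixes dropped from the secret names; it assumes the three secrets use distinct key names so the projection does not collide.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls -R /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: s-test-opt-del        # deleted mid-test; optional, so the volume stays healthy
          optional: true
      - secret:
          name: s-test-opt-upd        # updated mid-test; the kubelet syncs the new contents
          optional: true
      - secret:
          name: s-test-opt-create     # created mid-test; its keys appear once it exists
          optional: true
EOF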
• [SLOW TEST:88.804 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:54:54.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1033 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 14 21:54:54.408: INFO: Found 0 stateful pods, waiting for 3 May 14 21:55:04.413: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:55:04.413: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:55:04.413: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 14 21:55:14.412: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 21:55:14.413: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 21:55:14.413: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 14 21:55:14.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1033 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:55:14.731: INFO: stderr: "I0514 21:55:14.564871 2483 log.go:172] (0xc00098c6e0) (0xc0005a4000) Create stream\nI0514 21:55:14.564915 2483 log.go:172] (0xc00098c6e0) (0xc0005a4000) Stream added, broadcasting: 1\nI0514 21:55:14.566832 2483 log.go:172] (0xc00098c6e0) Reply frame received for 1\nI0514 21:55:14.566873 2483 log.go:172] (0xc00098c6e0) (0xc000605cc0) Create stream\nI0514 21:55:14.566888 2483 log.go:172] (0xc00098c6e0) (0xc000605cc0) Stream added, broadcasting: 3\nI0514 21:55:14.567663 2483 log.go:172] (0xc00098c6e0) Reply frame received for 3\nI0514 21:55:14.567712 2483 log.go:172] (0xc00098c6e0) (0xc0001e8000) Create stream\nI0514 21:55:14.567727 2483 log.go:172] (0xc00098c6e0) (0xc0001e8000) Stream added, 
broadcasting: 5\nI0514 21:55:14.568398 2483 log.go:172] (0xc00098c6e0) Reply frame received for 5\nI0514 21:55:14.656934 2483 log.go:172] (0xc00098c6e0) Data frame received for 5\nI0514 21:55:14.656980 2483 log.go:172] (0xc0001e8000) (5) Data frame handling\nI0514 21:55:14.657013 2483 log.go:172] (0xc0001e8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:55:14.723529 2483 log.go:172] (0xc00098c6e0) Data frame received for 3\nI0514 21:55:14.723566 2483 log.go:172] (0xc000605cc0) (3) Data frame handling\nI0514 21:55:14.723598 2483 log.go:172] (0xc000605cc0) (3) Data frame sent\nI0514 21:55:14.723765 2483 log.go:172] (0xc00098c6e0) Data frame received for 3\nI0514 21:55:14.723791 2483 log.go:172] (0xc000605cc0) (3) Data frame handling\nI0514 21:55:14.723826 2483 log.go:172] (0xc00098c6e0) Data frame received for 5\nI0514 21:55:14.723881 2483 log.go:172] (0xc0001e8000) (5) Data frame handling\nI0514 21:55:14.725780 2483 log.go:172] (0xc00098c6e0) Data frame received for 1\nI0514 21:55:14.725813 2483 log.go:172] (0xc0005a4000) (1) Data frame handling\nI0514 21:55:14.725831 2483 log.go:172] (0xc0005a4000) (1) Data frame sent\nI0514 21:55:14.725867 2483 log.go:172] (0xc00098c6e0) (0xc0005a4000) Stream removed, broadcasting: 1\nI0514 21:55:14.725893 2483 log.go:172] (0xc00098c6e0) Go away received\nI0514 21:55:14.726462 2483 log.go:172] (0xc00098c6e0) (0xc0005a4000) Stream removed, broadcasting: 1\nI0514 21:55:14.726488 2483 log.go:172] (0xc00098c6e0) (0xc000605cc0) Stream removed, broadcasting: 3\nI0514 21:55:14.726502 2483 log.go:172] (0xc00098c6e0) (0xc0001e8000) Stream removed, broadcasting: 5\n" May 14 21:55:14.732: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:55:14.732: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 14 21:55:24.760: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 14 21:55:34.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1033 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:55:35.036: INFO: stderr: "I0514 21:55:34.958771 2505 log.go:172] (0xc0009246e0) (0xc0009580a0) Create stream\nI0514 21:55:34.958831 2505 log.go:172] (0xc0009246e0) (0xc0009580a0) Stream added, broadcasting: 1\nI0514 21:55:34.961394 2505 log.go:172] (0xc0009246e0) Reply frame received for 1\nI0514 21:55:34.961432 2505 log.go:172] (0xc0009246e0) (0xc0006e3ae0) Create stream\nI0514 21:55:34.961451 2505 log.go:172] (0xc0009246e0) (0xc0006e3ae0) Stream added, broadcasting: 3\nI0514 21:55:34.962389 2505 log.go:172] (0xc0009246e0) Reply frame received for 3\nI0514 21:55:34.962440 2505 log.go:172] (0xc0009246e0) (0xc000622000) Create stream\nI0514 21:55:34.962457 2505 log.go:172] (0xc0009246e0) (0xc000622000) Stream added, broadcasting: 5\nI0514 21:55:34.963344 2505 log.go:172] (0xc0009246e0) Reply frame received for 5\nI0514 21:55:35.031493 2505 log.go:172] (0xc0009246e0) Data frame received for 5\nI0514 21:55:35.031529 2505 log.go:172] (0xc000622000) (5) Data frame handling\nI0514 21:55:35.031539 2505 log.go:172] (0xc000622000) (5) Data frame sent\nI0514 21:55:35.031548 2505 log.go:172] (0xc0009246e0) Data frame received for 5\nI0514 
21:55:35.031558 2505 log.go:172] (0xc000622000) (5) Data frame handling\nI0514 21:55:35.031578 2505 log.go:172] (0xc0009246e0) Data frame received for 3\nI0514 21:55:35.031587 2505 log.go:172] (0xc0006e3ae0) (3) Data frame handling\nI0514 21:55:35.031596 2505 log.go:172] (0xc0006e3ae0) (3) Data frame sent\nI0514 21:55:35.031604 2505 log.go:172] (0xc0009246e0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:55:35.031612 2505 log.go:172] (0xc0006e3ae0) (3) Data frame handling\nI0514 21:55:35.032821 2505 log.go:172] (0xc0009246e0) Data frame received for 1\nI0514 21:55:35.032850 2505 log.go:172] (0xc0009580a0) (1) Data frame handling\nI0514 21:55:35.032870 2505 log.go:172] (0xc0009580a0) (1) Data frame sent\nI0514 21:55:35.032887 2505 log.go:172] (0xc0009246e0) (0xc0009580a0) Stream removed, broadcasting: 1\nI0514 21:55:35.032906 2505 log.go:172] (0xc0009246e0) Go away received\nI0514 21:55:35.033423 2505 log.go:172] (0xc0009246e0) (0xc0009580a0) Stream removed, broadcasting: 1\nI0514 21:55:35.033445 2505 log.go:172] (0xc0009246e0) (0xc0006e3ae0) Stream removed, broadcasting: 3\nI0514 21:55:35.033455 2505 log.go:172] (0xc0009246e0) (0xc000622000) Stream removed, broadcasting: 5\n" May 14 21:55:35.036: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:55:35.036: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:55:45.055: INFO: Waiting for StatefulSet statefulset-1033/ss2 to complete update May 14 21:55:45.055: INFO: Waiting for Pod statefulset-1033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:55:45.055: INFO: Waiting for Pod statefulset-1033/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:55:45.055: INFO: Waiting for Pod statefulset-1033/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:55:55.061: INFO: Waiting for StatefulSet statefulset-1033/ss2 to complete update May 14 21:55:55.061: INFO: Waiting for Pod statefulset-1033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:55:55.061: INFO: Waiting for Pod statefulset-1033/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 14 21:56:05.061: INFO: Waiting for StatefulSet statefulset-1033/ss2 to complete update May 14 21:56:05.061: INFO: Waiting for Pod statefulset-1033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 14 21:56:15.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1033 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 14 21:56:15.278: INFO: stderr: "I0514 21:56:15.188563 2525 log.go:172] (0xc0009f6b00) (0xc0007900a0) Create stream\nI0514 21:56:15.188640 2525 log.go:172] (0xc0009f6b00) (0xc0007900a0) Stream added, broadcasting: 1\nI0514 21:56:15.190431 2525 log.go:172] (0xc0009f6b00) Reply frame received for 1\nI0514 21:56:15.190475 2525 log.go:172] (0xc0009f6b00) (0xc000ae0000) Create stream\nI0514 21:56:15.190497 2525 log.go:172] (0xc0009f6b00) (0xc000ae0000) Stream added, broadcasting: 3\nI0514 21:56:15.191473 2525 log.go:172] (0xc0009f6b00) Reply frame received for 3\nI0514 21:56:15.191509 2525 log.go:172] (0xc0009f6b00) (0xc000791cc0) Create stream\nI0514 21:56:15.191524 2525 log.go:172] (0xc0009f6b00) (0xc000791cc0) Stream added, broadcasting: 5\nI0514 
21:56:15.192439 2525 log.go:172] (0xc0009f6b00) Reply frame received for 5\nI0514 21:56:15.243249 2525 log.go:172] (0xc0009f6b00) Data frame received for 5\nI0514 21:56:15.243281 2525 log.go:172] (0xc000791cc0) (5) Data frame handling\nI0514 21:56:15.243300 2525 log.go:172] (0xc000791cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0514 21:56:15.272584 2525 log.go:172] (0xc0009f6b00) Data frame received for 3\nI0514 21:56:15.272604 2525 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0514 21:56:15.272656 2525 log.go:172] (0xc000ae0000) (3) Data frame sent\nI0514 21:56:15.272787 2525 log.go:172] (0xc0009f6b00) Data frame received for 5\nI0514 21:56:15.272825 2525 log.go:172] (0xc000791cc0) (5) Data frame handling\nI0514 21:56:15.272867 2525 log.go:172] (0xc0009f6b00) Data frame received for 3\nI0514 21:56:15.272900 2525 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0514 21:56:15.274185 2525 log.go:172] (0xc0009f6b00) Data frame received for 1\nI0514 21:56:15.274194 2525 log.go:172] (0xc0007900a0) (1) Data frame handling\nI0514 21:56:15.274211 2525 log.go:172] (0xc0007900a0) (1) Data frame sent\nI0514 21:56:15.274318 2525 log.go:172] (0xc0009f6b00) (0xc0007900a0) Stream removed, broadcasting: 1\nI0514 21:56:15.274538 2525 log.go:172] (0xc0009f6b00) (0xc0007900a0) Stream removed, broadcasting: 1\nI0514 21:56:15.274554 2525 log.go:172] (0xc0009f6b00) (0xc000ae0000) Stream removed, broadcasting: 3\nI0514 21:56:15.274634 2525 log.go:172] (0xc0009f6b00) (0xc000791cc0) Stream removed, broadcasting: 5\n" May 14 21:56:15.279: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 14 21:56:15.279: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 14 21:56:25.308: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 14 21:56:35.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1033 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 14 21:56:35.571: INFO: stderr: "I0514 21:56:35.485620 2547 log.go:172] (0xc0005e06e0) (0xc00057c000) Create stream\nI0514 21:56:35.485677 2547 log.go:172] (0xc0005e06e0) (0xc00057c000) Stream added, broadcasting: 1\nI0514 21:56:35.487531 2547 log.go:172] (0xc0005e06e0) Reply frame received for 1\nI0514 21:56:35.487555 2547 log.go:172] (0xc0005e06e0) (0xc0006ffb80) Create stream\nI0514 21:56:35.487563 2547 log.go:172] (0xc0005e06e0) (0xc0006ffb80) Stream added, broadcasting: 3\nI0514 21:56:35.488177 2547 log.go:172] (0xc0005e06e0) Reply frame received for 3\nI0514 21:56:35.488202 2547 log.go:172] (0xc0005e06e0) (0xc00025a000) Create stream\nI0514 21:56:35.488209 2547 log.go:172] (0xc0005e06e0) (0xc00025a000) Stream added, broadcasting: 5\nI0514 21:56:35.488801 2547 log.go:172] (0xc0005e06e0) Reply frame received for 5\nI0514 21:56:35.562382 2547 log.go:172] (0xc0005e06e0) Data frame received for 3\nI0514 21:56:35.562427 2547 log.go:172] (0xc0006ffb80) (3) Data frame handling\nI0514 21:56:35.562464 2547 log.go:172] (0xc0006ffb80) (3) Data frame sent\nI0514 21:56:35.562496 2547 log.go:172] (0xc0005e06e0) Data frame received for 3\nI0514 21:56:35.562525 2547 log.go:172] (0xc0006ffb80) (3) Data frame handling\nI0514 21:56:35.562565 2547 log.go:172] (0xc0005e06e0) Data frame received for 5\nI0514 21:56:35.562585 2547 log.go:172] (0xc00025a000) (5) Data frame handling\nI0514 21:56:35.562618 2547 
log.go:172] (0xc00025a000) (5) Data frame sent\nI0514 21:56:35.562648 2547 log.go:172] (0xc0005e06e0) Data frame received for 5\nI0514 21:56:35.562666 2547 log.go:172] (0xc00025a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0514 21:56:35.564386 2547 log.go:172] (0xc0005e06e0) Data frame received for 1\nI0514 21:56:35.564406 2547 log.go:172] (0xc00057c000) (1) Data frame handling\nI0514 21:56:35.564425 2547 log.go:172] (0xc00057c000) (1) Data frame sent\nI0514 21:56:35.564656 2547 log.go:172] (0xc0005e06e0) (0xc00057c000) Stream removed, broadcasting: 1\nI0514 21:56:35.564694 2547 log.go:172] (0xc0005e06e0) Go away received\nI0514 21:56:35.565331 2547 log.go:172] (0xc0005e06e0) (0xc00057c000) Stream removed, broadcasting: 1\nI0514 21:56:35.565398 2547 log.go:172] (0xc0005e06e0) (0xc0006ffb80) Stream removed, broadcasting: 3\nI0514 21:56:35.565412 2547 log.go:172] (0xc0005e06e0) (0xc00025a000) Stream removed, broadcasting: 5\n" May 14 21:56:35.571: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 14 21:56:35.571: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 14 21:56:45.592: INFO: Waiting for StatefulSet statefulset-1033/ss2 to complete update May 14 21:56:45.592: INFO: Waiting for Pod statefulset-1033/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 14 21:56:45.592: INFO: Waiting for Pod statefulset-1033/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 14 21:56:45.592: INFO: Waiting for Pod statefulset-1033/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 14 21:56:55.600: INFO: Waiting for StatefulSet statefulset-1033/ss2 to complete update May 14 21:56:55.600: INFO: Waiting for Pod statefulset-1033/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 14 21:56:55.600: INFO: Waiting for Pod statefulset-1033/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 14 21:57:05.599: INFO: Deleting all statefulset in ns statefulset-1033 May 14 21:57:05.602: INFO: Scaling statefulset ss2 to 0 May 14 21:57:35.626: INFO: Waiting for statefulset status.replicas updated to 0 May 14 21:57:35.629: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:57:35.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1033" for this suite. 
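For readers reproducing this case outside the harness, a minimal sketch of the StatefulSet the test drives. The name ss2, the service name test, the replica count of 3, the namespace, and the two httpd image tags are taken from the log above; the label scheme and container name are assumptions.

    kubectl apply -n statefulset-1033 -f - <<'EOF'
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss2
    spec:
      serviceName: test          # service "test" is created in the namespace by the test
      replicas: 3
      selector:
        matchLabels:
          app: ss2               # label scheme is an assumption
      template:
        metadata:
          labels:
            app: ss2
        spec:
          containers:
          - name: webserver      # container name is an assumption
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    # Rolling update to 2.4.39-alpine, then a rollback, as exercised above:
    kubectl -n statefulset-1033 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
    kubectl -n statefulset-1033 rollout undo statefulset/ss2

The revision hashes in the log (ss2-65c7964b94 and ss2-84f9d6bf57) are the ControllerRevisions produced by the two template versions; the rollback waits until every pod reports the original revision again.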
• [SLOW TEST:161.512 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":133,"skipped":2247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:57:35.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 14 21:57:35.786: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:35.798: INFO: Number of nodes with available pods: 0 May 14 21:57:35.798: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:36.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:36.806: INFO: Number of nodes with available pods: 0 May 14 21:57:36.806: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:37.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:37.805: INFO: Number of nodes with available pods: 0 May 14 21:57:37.805: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:38.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:38.805: INFO: Number of nodes with available pods: 0 May 14 21:57:38.805: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:39.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:39.806: INFO: Number of nodes with available pods: 1 May 14 21:57:39.806: INFO: Node jerma-worker2 is running more than one daemon pod May 14 21:57:40.803: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:40.805: INFO: Number of nodes with available pods: 2 May 14 21:57:40.805: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 14 21:57:40.815: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:40.839: INFO: Number of nodes with available pods: 1 May 14 21:57:40.839: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:41.887: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:41.898: INFO: Number of nodes with available pods: 1 May 14 21:57:41.898: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:42.872: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:42.923: INFO: Number of nodes with available pods: 1 May 14 21:57:42.923: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:43.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:43.943: INFO: Number of nodes with available pods: 1 May 14 21:57:43.943: INFO: Node jerma-worker is running more than one daemon pod May 14 21:57:45.267: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 21:57:45.269: INFO: Number of nodes with available pods: 2 May 14 21:57:45.269: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8055, will wait for the garbage collector to delete the pods May 14 21:57:45.331: INFO: Deleting DaemonSet.extensions daemon-set took: 6.461118ms May 14 21:57:45.832: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.238577ms May 14 21:57:59.535: INFO: Number of nodes with available pods: 0 May 14 21:57:59.536: INFO: Number of running nodes: 0, number of available pods: 0 May 14 21:57:59.539: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8055/daemonsets","resourceVersion":"16215213"},"items":null} May 14 21:57:59.541: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8055/pods","resourceVersion":"16215213"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:57:59.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8055" for this suite. 
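The step above flips one daemon pod's phase to Failed and waits for the DaemonSet controller to replace it. A minimal sketch of a DaemonSet of the shape being tested; the name daemon-set and the namespace come from the log, while the labels, container name, and image are assumptions. The pod template carries no toleration for node-role.kubernetes.io/master:NoSchedule, which is why the jerma-control-plane node is skipped throughout the log.

    kubectl apply -n daemonsets-8055 -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set        # label scheme is an assumption
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app            # container name and image are assumptions
            image: docker.io/library/httpd:2.4.38-alpine
    EOF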
• [SLOW TEST:23.901 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":134,"skipped":2279,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:57:59.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:57:59.661: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 14 21:58:01.768: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:58:03.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7793" for this suite. 
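The quota/RC interaction above can be reproduced with two objects: a ResourceQuota capping the namespace at two pods and a ReplicationController asking for more. The names and the two-pod limit come from the log; the selector, container name, and image are assumptions.

    kubectl -n replication-controller-7793 create quota condition-test --hard=pods=2
    kubectl -n replication-controller-7793 apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-test
    spec:
      replicas: 3                # more than the quota allows
      selector:
        app: condition-test      # selector/labels are assumptions
      template:
        metadata:
          labels:
            app: condition-test
        spec:
          containers:
          - name: app            # container name and image are assumptions
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    # The RC surfaces a ReplicaFailure condition; scaling within quota clears it:
    kubectl -n replication-controller-7793 scale rc condition-test --replicas=2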
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":135,"skipped":2287,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:58:03.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:58:03.580: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:58:05.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2524" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":136,"skipped":2307,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:58:05.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 21:58:06.571: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 21:58:08.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090286, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090286, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090286, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090286, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 21:58:11.965: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:58:12.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6713" for this suite. STEP: Destroying namespace "webhook-6713-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.807 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":137,"skipped":2312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:58:12.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8970 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 21:58:12.268: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 21:58:40.555: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.195:8080/dial?request=hostname&protocol=udp&host=10.244.1.110&port=8081&tries=1'] Namespace:pod-network-test-8970 PodName:host-test-container-pod ContainerName:agnhost 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:58:40.555: INFO: >>> kubeConfig: /root/.kube/config I0514 21:58:40.585847 6 log.go:172] (0xc001b6fad0) (0xc0022b6320) Create stream I0514 21:58:40.585875 6 log.go:172] (0xc001b6fad0) (0xc0022b6320) Stream added, broadcasting: 1 I0514 21:58:40.588397 6 log.go:172] (0xc001b6fad0) Reply frame received for 1 I0514 21:58:40.588436 6 log.go:172] (0xc001b6fad0) (0xc000d3abe0) Create stream I0514 21:58:40.588449 6 log.go:172] (0xc001b6fad0) (0xc000d3abe0) Stream added, broadcasting: 3 I0514 21:58:40.589909 6 log.go:172] (0xc001b6fad0) Reply frame received for 3 I0514 21:58:40.589947 6 log.go:172] (0xc001b6fad0) (0xc000e4a280) Create stream I0514 21:58:40.589960 6 log.go:172] (0xc001b6fad0) (0xc000e4a280) Stream added, broadcasting: 5 I0514 21:58:40.590855 6 log.go:172] (0xc001b6fad0) Reply frame received for 5 I0514 21:58:40.677811 6 log.go:172] (0xc001b6fad0) Data frame received for 3 I0514 21:58:40.677888 6 log.go:172] (0xc000d3abe0) (3) Data frame handling I0514 21:58:40.677911 6 log.go:172] (0xc000d3abe0) (3) Data frame sent I0514 21:58:40.678105 6 log.go:172] (0xc001b6fad0) Data frame received for 5 I0514 21:58:40.678125 6 log.go:172] (0xc000e4a280) (5) Data frame handling I0514 21:58:40.678144 6 log.go:172] (0xc001b6fad0) Data frame received for 3 I0514 21:58:40.678157 6 log.go:172] (0xc000d3abe0) (3) Data frame handling I0514 21:58:40.679918 6 log.go:172] (0xc001b6fad0) Data frame received for 1 I0514 21:58:40.679939 6 log.go:172] (0xc0022b6320) (1) Data frame handling I0514 21:58:40.679991 6 log.go:172] (0xc0022b6320) (1) Data frame sent I0514 21:58:40.680015 6 log.go:172] (0xc001b6fad0) (0xc0022b6320) Stream removed, broadcasting: 1 I0514 21:58:40.680039 6 log.go:172] (0xc001b6fad0) Go away received I0514 21:58:40.680195 6 log.go:172] (0xc001b6fad0) (0xc0022b6320) Stream removed, broadcasting: 1 I0514 21:58:40.680217 6 log.go:172] (0xc001b6fad0) (0xc000d3abe0) Stream removed, broadcasting: 3 I0514 21:58:40.680228 6 log.go:172] (0xc001b6fad0) (0xc000e4a280) Stream removed, broadcasting: 5 May 14 21:58:40.680: INFO: Waiting for responses: map[] May 14 21:58:40.683: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.195:8080/dial?request=hostname&protocol=udp&host=10.244.2.194&port=8081&tries=1'] Namespace:pod-network-test-8970 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 21:58:40.683: INFO: >>> kubeConfig: /root/.kube/config I0514 21:58:40.716326 6 log.go:172] (0xc003156420) (0xc000d3bae0) Create stream I0514 21:58:40.716365 6 log.go:172] (0xc003156420) (0xc000d3bae0) Stream added, broadcasting: 1 I0514 21:58:40.719272 6 log.go:172] (0xc003156420) Reply frame received for 1 I0514 21:58:40.719304 6 log.go:172] (0xc003156420) (0xc0022b63c0) Create stream I0514 21:58:40.719314 6 log.go:172] (0xc003156420) (0xc0022b63c0) Stream added, broadcasting: 3 I0514 21:58:40.720664 6 log.go:172] (0xc003156420) Reply frame received for 3 I0514 21:58:40.720700 6 log.go:172] (0xc003156420) (0xc000d3bd60) Create stream I0514 21:58:40.720722 6 log.go:172] (0xc003156420) (0xc000d3bd60) Stream added, broadcasting: 5 I0514 21:58:40.722181 6 log.go:172] (0xc003156420) Reply frame received for 5 I0514 21:58:40.782085 6 log.go:172] (0xc003156420) Data frame received for 3 I0514 21:58:40.782125 6 log.go:172] (0xc0022b63c0) (3) Data frame handling I0514 21:58:40.782151 6 log.go:172] (0xc0022b63c0) (3) Data frame sent I0514 
21:58:40.783049 6 log.go:172] (0xc003156420) Data frame received for 5 I0514 21:58:40.783072 6 log.go:172] (0xc000d3bd60) (5) Data frame handling I0514 21:58:40.783127 6 log.go:172] (0xc003156420) Data frame received for 3 I0514 21:58:40.783155 6 log.go:172] (0xc0022b63c0) (3) Data frame handling I0514 21:58:40.784824 6 log.go:172] (0xc003156420) Data frame received for 1 I0514 21:58:40.784836 6 log.go:172] (0xc000d3bae0) (1) Data frame handling I0514 21:58:40.784846 6 log.go:172] (0xc000d3bae0) (1) Data frame sent I0514 21:58:40.784857 6 log.go:172] (0xc003156420) (0xc000d3bae0) Stream removed, broadcasting: 1 I0514 21:58:40.784945 6 log.go:172] (0xc003156420) Go away received I0514 21:58:40.785006 6 log.go:172] (0xc003156420) (0xc000d3bae0) Stream removed, broadcasting: 1 I0514 21:58:40.785053 6 log.go:172] (0xc003156420) (0xc0022b63c0) Stream removed, broadcasting: 3 I0514 21:58:40.785361 6 log.go:172] (0xc003156420) (0xc000d3bd60) Stream removed, broadcasting: 5 May 14 21:58:40.785: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:58:40.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8970" for this suite. • [SLOW TEST:28.583 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:58:40.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-7qsv STEP: Creating a pod to test atomic-volume-subpath May 14 21:58:40.887: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7qsv" in namespace "subpath-4734" to be "success or failure" May 14 21:58:40.890: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.470502ms May 14 21:58:42.895: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008429105s May 14 21:58:44.898: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 4.01159087s May 14 21:58:46.962: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 6.075093255s May 14 21:58:48.967: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 8.079951181s May 14 21:58:50.986: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 10.09904161s May 14 21:58:52.990: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 12.103690825s May 14 21:58:54.994: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 14.107464024s May 14 21:58:56.998: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 16.111095608s May 14 21:58:59.001: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 18.114615114s May 14 21:59:01.010: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 20.123609996s May 14 21:59:03.015: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Running", Reason="", readiness=true. Elapsed: 22.128287344s May 14 21:59:05.019: INFO: Pod "pod-subpath-test-downwardapi-7qsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.132108909s STEP: Saw pod success May 14 21:59:05.019: INFO: Pod "pod-subpath-test-downwardapi-7qsv" satisfied condition "success or failure" May 14 21:59:05.022: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-7qsv container test-container-subpath-downwardapi-7qsv: STEP: delete the pod May 14 21:59:05.065: INFO: Waiting for pod pod-subpath-test-downwardapi-7qsv to disappear May 14 21:59:05.069: INFO: Pod pod-subpath-test-downwardapi-7qsv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7qsv May 14 21:59:05.069: INFO: Deleting pod "pod-subpath-test-downwardapi-7qsv" in namespace "subpath-4734" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:05.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4734" for this suite. 
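The pod above mounts a single file out of a downwardAPI volume via subPath (downwardAPI is one of the atomic-writer volume types this group covers). A minimal sketch under those assumptions; the pod and container names and the namespace come from the log, while the image, file name, and command are assumptions.

    kubectl apply -n subpath-4734 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-test-downwardapi-7qsv
    spec:
      restartPolicy: Never
      volumes:
      - name: downward
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      containers:
      - name: test-container-subpath-downwardapi-7qsv
        image: busybox           # image and command are assumptions
        command: ["sh", "-c", "cat /probe/podname"]
        volumeMounts:
        - name: downward
          mountPath: /probe/podname
          subPath: podname       # mounts only this one file from the volume
    EOF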
• [SLOW TEST:24.291 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":139,"skipped":2422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:05.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 21:59:05.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-366' May 14 21:59:08.221: INFO: stderr: "" May 14 21:59:08.221: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 14 21:59:08.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-366' May 14 21:59:12.639: INFO: stderr: "" May 14 21:59:12.639: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:12.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-366" for this suite. 
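The kubectl invocation is quoted verbatim in the log; note that --generator=run-pod/v1 was deprecated and has since been removed from newer kubectl releases, where kubectl run only creates pods. A flag-for-flag equivalent on a current client:

    kubectl run e2e-test-httpd-pod \
      --restart=Never \
      --image=docker.io/library/httpd:2.4.38-alpine \
      --namespace=kubectl-366
    kubectl get pod e2e-test-httpd-pod -n kubectl-366     # verify the pod was created
    kubectl delete pod e2e-test-httpd-pod -n kubectl-366  # cleanup, as the test does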
• [SLOW TEST:7.592 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":140,"skipped":2447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:12.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:59:12.869: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4bbec6ad-13fe-4b5b-a098-81824b9e4adf" in namespace "security-context-test-1712" to be "success or failure" May 14 21:59:12.891: INFO: Pod "busybox-user-65534-4bbec6ad-13fe-4b5b-a098-81824b9e4adf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.07198ms May 14 21:59:14.895: INFO: Pod "busybox-user-65534-4bbec6ad-13fe-4b5b-a098-81824b9e4adf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026345581s May 14 21:59:16.900: INFO: Pod "busybox-user-65534-4bbec6ad-13fe-4b5b-a098-81824b9e4adf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031285991s May 14 21:59:16.900: INFO: Pod "busybox-user-65534-4bbec6ad-13fe-4b5b-a098-81824b9e4adf" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:16.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1712" for this suite. 
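A minimal sketch of a pod exercising the same check: run as uid 65534 (the conventional nobody user) and let the container report its uid. The pod name and namespace come from the log; the image and command are assumptions.

    kubectl apply -n security-context-test-1712 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-user-65534-4bbec6ad-13fe-4b5b-a098-81824b9e4adf
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox             # image and command are assumptions
        command: ["sh", "-c", "id -u"]   # expected to print 65534
        securityContext:
          runAsUser: 65534
    EOF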
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2526,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:16.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-d4b63f56-708e-42ab-84b0-e5f5c30d3850 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:16.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5882" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":142,"skipped":2527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:17.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 14 21:59:17.068: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 14 21:59:28.452: INFO: >>> kubeConfig: /root/.kube/config May 14 21:59:30.390: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:40.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7550" for this suite. 
• [SLOW TEST:23.908 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":143,"skipped":2567,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:40.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-962/configmap-test-ee3c4d8a-2b8b-4508-bc97-2acad771a5b0 STEP: Creating a pod to test consume configMaps May 14 21:59:41.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537" in namespace "configmap-962" to be "success or failure" May 14 21:59:41.023: INFO: Pod "pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.973989ms May 14 21:59:43.028: INFO: Pod "pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007485715s May 14 21:59:45.034: INFO: Pod "pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013221723s STEP: Saw pod success May 14 21:59:45.034: INFO: Pod "pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537" satisfied condition "success or failure" May 14 21:59:45.036: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537 container env-test: STEP: delete the pod May 14 21:59:45.068: INFO: Waiting for pod pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537 to disappear May 14 21:59:45.071: INFO: Pod pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:45.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-962" for this suite. 
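A minimal sketch of the ConfigMap-to-environment-variable wiring this test covers. The ConfigMap name, pod name, container name env-test, and namespace come from the log; the key, value, env var name, and image are assumptions.

    kubectl apply -n configmap-962 -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test-ee3c4d8a-2b8b-4508-bc97-2acad771a5b0
    data:
      data-1: value-1            # key/value are assumptions
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-4dc0934a-6de3-4f55-bfe9-e149b42ba537
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox           # image is an assumption
        command: ["sh", "-c", "env"]
        env:
        - name: CONFIG_DATA_1    # env var name is an assumption
          valueFrom:
            configMapKeyRef:
              name: configmap-test-ee3c4d8a-2b8b-4508-bc97-2acad771a5b0
              key: data-1
    EOF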
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2569,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:45.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 21:59:45.152: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f2d2c9ff-ea51-4f47-b25e-7fb041ceb9ca" in namespace "security-context-test-4118" to be "success or failure" May 14 21:59:45.171: INFO: Pod "busybox-privileged-false-f2d2c9ff-ea51-4f47-b25e-7fb041ceb9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 18.362237ms May 14 21:59:47.175: INFO: Pod "busybox-privileged-false-f2d2c9ff-ea51-4f47-b25e-7fb041ceb9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022188081s May 14 21:59:49.178: INFO: Pod "busybox-privileged-false-f2d2c9ff-ea51-4f47-b25e-7fb041ceb9ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025605064s May 14 21:59:49.178: INFO: Pod "busybox-privileged-false-f2d2c9ff-ea51-4f47-b25e-7fb041ceb9ca" satisfied condition "success or failure" May 14 21:59:49.183: INFO: Got logs for pod "busybox-privileged-false-f2d2c9ff-ea51-4f47-b25e-7fb041ceb9ca": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:49.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4118" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2571,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:49.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 14 21:59:49.435: INFO: Waiting up to 5m0s for pod "var-expansion-643fe77e-f986-4dc3-8209-57258b927b51" in namespace "var-expansion-7473" to be "success or failure" May 14 21:59:49.464: INFO: Pod "var-expansion-643fe77e-f986-4dc3-8209-57258b927b51": Phase="Pending", Reason="", readiness=false. Elapsed: 29.009843ms May 14 21:59:51.486: INFO: Pod "var-expansion-643fe77e-f986-4dc3-8209-57258b927b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050292596s May 14 21:59:53.490: INFO: Pod "var-expansion-643fe77e-f986-4dc3-8209-57258b927b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054741283s STEP: Saw pod success May 14 21:59:53.490: INFO: Pod "var-expansion-643fe77e-f986-4dc3-8209-57258b927b51" satisfied condition "success or failure" May 14 21:59:53.493: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-643fe77e-f986-4dc3-8209-57258b927b51 container dapi-container: STEP: delete the pod May 14 21:59:53.558: INFO: Waiting for pod var-expansion-643fe77e-f986-4dc3-8209-57258b927b51 to disappear May 14 21:59:53.568: INFO: Pod var-expansion-643fe77e-f986-4dc3-8209-57258b927b51 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:53.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7473" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2585,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:53.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 14 21:59:58.460: INFO: Successfully updated pod "pod-update-732d352b-26ad-42f4-a0ba-8a9b984aca0e" STEP: verifying the updated pod is in kubernetes May 14 21:59:58.475: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 21:59:58.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6086" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2587,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 21:59:58.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 14 21:59:58.664: INFO: Waiting up to 5m0s for pod "pod-a8235d49-1d68-4dba-a849-041f7362d4ac" in namespace "emptydir-3529" to be "success or failure" May 14 21:59:58.754: INFO: Pod "pod-a8235d49-1d68-4dba-a849-041f7362d4ac": Phase="Pending", Reason="", readiness=false. Elapsed: 90.021339ms May 14 22:00:00.758: INFO: Pod "pod-a8235d49-1d68-4dba-a849-041f7362d4ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093726338s May 14 22:00:02.762: INFO: Pod "pod-a8235d49-1d68-4dba-a849-041f7362d4ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.098652583s STEP: Saw pod success May 14 22:00:02.763: INFO: Pod "pod-a8235d49-1d68-4dba-a849-041f7362d4ac" satisfied condition "success or failure" May 14 22:00:02.766: INFO: Trying to get logs from node jerma-worker2 pod pod-a8235d49-1d68-4dba-a849-041f7362d4ac container test-container: STEP: delete the pod May 14 22:00:02.918: INFO: Waiting for pod pod-a8235d49-1d68-4dba-a849-041f7362d4ac to disappear May 14 22:00:02.928: INFO: Pod pod-a8235d49-1d68-4dba-a849-041f7362d4ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:00:02.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3529" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2587,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:00:02.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:00:20.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1310" for this suite. • [SLOW TEST:17.218 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":149,"skipped":2589,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:00:20.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 14 22:00:20.236: INFO: Waiting up to 5m0s for pod "pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9" in namespace "emptydir-3704" to be "success or failure" May 14 22:00:20.240: INFO: Pod "pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.693491ms May 14 22:00:22.244: INFO: Pod "pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008041458s May 14 22:00:24.280: INFO: Pod "pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04400833s STEP: Saw pod success May 14 22:00:24.280: INFO: Pod "pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9" satisfied condition "success or failure" May 14 22:00:24.283: INFO: Trying to get logs from node jerma-worker2 pod pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9 container test-container: STEP: delete the pod May 14 22:00:24.303: INFO: Waiting for pod pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9 to disappear May 14 22:00:24.306: INFO: Pod pod-b3da73ab-17cf-45b9-a7e6-1c00905234a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:00:24.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3704" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2595,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:00:24.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:00:24.420: INFO: Creating deployment "webserver-deployment" May 14 22:00:24.437: INFO: Waiting for observed generation 1 May 14 22:00:26.545: INFO: Waiting for all required pods to come up May 14 22:00:26.550: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 14 22:00:38.562: INFO: Waiting for deployment "webserver-deployment" to complete May 14 22:00:38.567: INFO: Updating deployment "webserver-deployment" with a non-existent image May 14 22:00:38.572: INFO: Updating deployment webserver-deployment May 14 22:00:38.572: INFO: Waiting for observed generation 2 May 14 22:00:40.662: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 14 22:00:40.741: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 14 22:00:40.745: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 14 22:00:40.752: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 14 22:00:40.753: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 14 22:00:40.910: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 14 22:00:40.914: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 14 22:00:40.914: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 14 22:00:40.919: INFO: Updating deployment webserver-deployment May 14 22:00:40.919: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 14 22:00:41.540: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 14 22:00:42.036: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 14 22:00:45.063: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6885 /apis/apps/v1/namespaces/deployment-6885/deployments/webserver-deployment 5308b0d0-c588-487d-b268-9fe9f273124e 16216467 3 2020-05-14 22:00:24 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053553d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-14 22:00:41 +0000 UTC,LastTransitionTime:2020-05-14 22:00:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-14 22:00:42 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 14 22:00:45.198: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6885 /apis/apps/v1/namespaces/deployment-6885/replicasets/webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 16216459 3 2020-05-14 22:00:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5308b0d0-c588-487d-b268-9fe9f273124e 0xc0053558a7 0xc0053558a8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005355918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 22:00:45.198: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 14 22:00:45.198: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6885 
/apis/apps/v1/namespaces/deployment-6885/replicasets/webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 16216442 3 2020-05-14 22:00:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5308b0d0-c588-487d-b268-9fe9f273124e 0xc0053557e7 0xc0053557e8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005355848 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 14 22:00:45.359: INFO: Pod "webserver-deployment-595b5b9587-5pmt8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5pmt8 webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-5pmt8 5e067e1e-fcfd-4e98-b23a-464f87e48bfa 16216477 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc005355dc7 0xc005355dc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.360: INFO: Pod "webserver-deployment-595b5b9587-5v2jb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5v2jb webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-5v2jb 31a4ceac-4f07-4618-8dcc-a4a69b633fe0 16216294 0 2020-05-14 22:00:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc005355f27 0xc005355f28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.115,StartTime:2020-05-14 22:00:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 22:00:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4dee2c5fac09149a4b193822b8638f9bab1164157bdfa0e3b687cc333d4aaf80,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.360: INFO: Pod "webserver-deployment-595b5b9587-678n9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-678n9 webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-678n9 6609a48e-369a-4f3e-a0a7-e41febf9c552 16216464 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc0053000a7 0xc0053000a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.360: INFO: Pod "webserver-deployment-595b5b9587-6h2lt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6h2lt webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-6h2lt 28b79303-d917-44cb-84ac-0d0f0c71b497 16216471 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc0053002a7 0xc0053002a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.361: INFO: Pod "webserver-deployment-595b5b9587-75qxf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-75qxf webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-75qxf a2ed1cb8-cde3-4fc6-ad51-bdc187ecfad8 16216493 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc005300427 0xc005300428}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.361: INFO: Pod "webserver-deployment-595b5b9587-7dmgm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7dmgm webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-7dmgm b5c3a679-6622-4dd2-81bf-d233debf1176 16216241 0 2020-05-14 22:00:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc0053005c7 0xc0053005c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.113,StartTime:2020-05-14 22:00:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 22:00:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f1e0e81b60c1d8c6dd7d12b3265c1f3f78fced3a4153b2fcb1eb0dc33b35bd1e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.361: INFO: Pod "webserver-deployment-595b5b9587-c5qgj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c5qgj webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-c5qgj 14170259-a811-4627-b13b-dda97a61b150 16216487 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc005300797 0xc005300798}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.361: INFO: Pod "webserver-deployment-595b5b9587-cnf2n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cnf2n webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-cnf2n 20427044-e2b3-435b-b102-13caf299155a 16216498 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc0053009f7 0xc0053009f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.362: INFO: Pod "webserver-deployment-595b5b9587-dcvk2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dcvk2 webserver-deployment-595b5b9587- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-595b5b9587-dcvk2 f136ce6a-898d-484d-8f0b-05d78dc00007 16216259 0 2020-05-14 22:00:24 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 77936154-4731-4beb-9101-e6807790dab5 0xc005300c47 0xc005300c48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.203,StartTime:2020-05-14 22:00:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 22:00:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5d357e709af19223dbb8c95c2ea8c55b03978504d9d431eb427c16d684c8b0a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.203,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[Serialized &Pod{} dumps condensed below; the Spec is identical for every pod in this Deployment — one httpd container, default-token-dgrrz service-account volume, RestartPolicy Always, default not-ready/unreachable tolerations (300s), default-scheduler, BestEffort QoS — so only per-pod fields are retained.]
May 14 22:00:45.362: INFO: Pod "webserver-deployment-595b5b9587-djx8r" is not available: Pending on jerma-worker (HostIP 172.17.0.10, no PodIP); UID 7b1b9b4b-c8f9-495f-ab84-38a01c2ec448, rv 16216500, created 2020-05-14 22:00:41 by ReplicaSet webserver-deployment-595b5b9587 (UID 77936154-4731-4beb-9101-e6807790dab5); container httpd (docker.io/library/httpd:2.4.38-alpine) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
May 14 22:00:45.362: INFO: Pod "webserver-deployment-595b5b9587-fpjnw" is not available: Pending on jerma-worker2 (HostIP 172.17.0.8, no PodIP); UID e71745b1-bd0e-4936-87bb-c15c5aa0b443, rv 16216478, created 2020-05-14 22:00:41; container httpd (docker.io/library/httpd:2.4.38-alpine) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
May 14 22:00:45.362: INFO: Pod "webserver-deployment-595b5b9587-ktrw8" is not available: Pending on jerma-worker (HostIP 172.17.0.10, no PodIP); UID c9acc269-ea91-41f5-9b7d-4aae084e8ea0, rv 16216510, created 2020-05-14 22:00:41; container httpd (docker.io/library/httpd:2.4.38-alpine) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
May 14 22:00:45.363: INFO: Pod "webserver-deployment-595b5b9587-lt9ns" is not available: Pending on jerma-worker2 (HostIP 172.17.0.8, no PodIP); UID ba92a3bc-3430-4b7d-8263-cba824ef4ffe, rv 16216474, created 2020-05-14 22:00:41; container httpd (docker.io/library/httpd:2.4.38-alpine) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
May 14 22:00:45.363: INFO: Pod "webserver-deployment-595b5b9587-ngq4d" is not available: Pending on jerma-worker (HostIP 172.17.0.10, no PodIP); UID 264cd1d0-3859-4bf5-9d2a-c558c495c1a3, rv 16216469, created 2020-05-14 22:00:41; container httpd (docker.io/library/httpd:2.4.38-alpine) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
May 14 22:00:45.363: INFO: Pod "webserver-deployment-595b5b9587-tbdxh" is available: Running on jerma-worker2 (HostIP 172.17.0.8, PodIP 10.244.2.204); UID 284e6451-f7e8-475d-92dc-dd57cd8a93b2, rv 16216286, created 2020-05-14 22:00:24; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-05-14 22:00:35, Ready=True since 22:00:35, ContainerID containerd://df39333844155b555f5571757c22b973b169612dbff70d63d99de64ef5f324da
May 14 22:00:45.363: INFO: Pod "webserver-deployment-595b5b9587-tnn9g" is not available: Pending on jerma-worker2 (HostIP 172.17.0.8, no PodIP); UID 397e6119-8d7d-4a22-a0bc-1d8cb64d90ce, rv 16216448, created 2020-05-14 22:00:41; container httpd (docker.io/library/httpd:2.4.38-alpine) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:41
May 14 22:00:45.364: INFO: Pod "webserver-deployment-595b5b9587-v656k" is available: Running on jerma-worker2 (HostIP 172.17.0.8, PodIP 10.244.2.205); UID 8e719a18-4c8d-490d-97a0-323bf64db28d, rv 16216307, created 2020-05-14 22:00:24; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-05-14 22:00:35, Ready=True since 22:00:36, ContainerID containerd://618923f44090c80b1b0495f968e7ac195da890b869f0a09199149404c5c7ce6a
May 14 22:00:45.364: INFO: Pod "webserver-deployment-595b5b9587-v8xjg" is available: Running on jerma-worker (HostIP 172.17.0.10, PodIP 10.244.1.114); UID ab08e977-63ce-4a8c-a915-d57609b0062a, rv 16216264, created 2020-05-14 22:00:24; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-05-14 22:00:31, Ready=True since 22:00:32, ContainerID containerd://a7f1063a67d770e2d79c2b7e86a7171c6e026ac04be802583ef508ddd8b9cc63
May 14 22:00:45.364: INFO: Pod "webserver-deployment-595b5b9587-xmnc2" is available: Running on jerma-worker (HostIP 172.17.0.10, PodIP 10.244.1.116); UID ebbc472a-5059-44dc-ace6-08014032abd5, rv 16216298, created 2020-05-14 22:00:24; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-05-14 22:00:35, Ready=True since 22:00:35, ContainerID containerd://46fd4768ce389b43ca452acf44fcf78545088dd82456a6386e1088133a1130dd
May 14 22:00:45.364: INFO: Pod "webserver-deployment-595b5b9587-zrh2p" is available: Running on jerma-worker (HostIP 172.17.0.10, PodIP 10.244.1.117); UID a834419b-a463-4e22-a036-3ab0e8cfbb85, rv 16216289, created 2020-05-14 22:00:24; container httpd (docker.io/library/httpd:2.4.38-alpine) running since 2020-05-14 22:00:35, Ready=True since 22:00:35, ContainerID containerd://1099868138d95ca05cbf22e27d856fefbefd7e087e7c6157179587f9c1cd8cc9
May 14 22:00:45.364: INFO: Pod "webserver-deployment-c7997dcc8-266t5" is not available: Pending on jerma-worker2 (HostIP 172.17.0.8, no PodIP); UID 4d071601-92a5-4409-9889-457b387cae1c, rv 16216506, created 2020-05-14 22:00:41 by ReplicaSet webserver-deployment-c7997dcc8 (UID c40d4184-6a0f-4df0-89a3-35613998d315); container httpd (webserver:404) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
May 14 22:00:45.365: INFO: Pod "webserver-deployment-c7997dcc8-cxxcs" is not available: Pending on jerma-worker (HostIP 172.17.0.10, no PodIP); UID 250455e9-ad4b-4a63-acf5-65814d7fbe59, rv 16216358, created 2020-05-14 22:00:38; container httpd (webserver:404) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:38
May 14 22:00:45.365: INFO: Pod "webserver-deployment-c7997dcc8-d24r6" is not available: Pending on jerma-worker2 (HostIP 172.17.0.8, no PodIP); UID 21e6033f-569e-462f-930d-b7a8fb61ebd0, rv 16216484, created 2020-05-14 22:00:41; container httpd (webserver:404) Waiting: ContainerCreating; Ready=False (ContainersNotReady: [httpd]); StartTime 2020-05-14 22:00:42
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.365: INFO: Pod "webserver-deployment-c7997dcc8-fctlz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fctlz webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-fctlz 25306e46-6fe4-4a39-af25-58c474f10f81 16216444 0 2020-05-14 22:00:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052da9b7 0xc0052da9b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.365: INFO: Pod "webserver-deployment-c7997dcc8-fs6tt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fs6tt webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-fs6tt 94a4e43a-e3fe-4982-8faa-cf8f5ad9686f 16216518 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052dab57 0xc0052dab58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeC
lassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:00:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.365: INFO: Pod "webserver-deployment-c7997dcc8-ggbnc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ggbnc webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-ggbnc 5db228c4-db2a-4fb0-a519-f58cf8923c10 16216517 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052dad77 0xc0052dad78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.366: INFO: Pod "webserver-deployment-c7997dcc8-hrbpr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hrbpr webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-hrbpr d9f1db85-3ee5-462b-a777-3d6d1168f55b 16216445 0 2020-05-14 22:00:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052dafc7 0xc0052dafc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.118,StartTime:2020-05-14 22:00:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.366: INFO: Pod "webserver-deployment-c7997dcc8-mbwqh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mbwqh webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-mbwqh fda520b2-78cf-46e2-9a75-930e814b430c 16216483 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052db277 0xc0052db278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.366: INFO: Pod "webserver-deployment-c7997dcc8-q8j8z" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q8j8z webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-q8j8z 2361208b-25ea-43a8-bbdf-302bf6c14271 16216373 0 2020-05-14 22:00:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052db4b7 0xc0052db4b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.366: INFO: Pod "webserver-deployment-c7997dcc8-w8497" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w8497 webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-w8497 42494dcc-0a08-45d2-bc57-f695b02d5fc1 16216463 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052db697 0xc0052db698}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:00:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.366: INFO: Pod "webserver-deployment-c7997dcc8-ww8b5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ww8b5 webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-ww8b5 a2def27b-77a2-4bc6-9d14-e780ff0970a8 16216494 0 2020-05-14 22:00:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052db837 0xc0052db838}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:00:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.366: INFO: Pod "webserver-deployment-c7997dcc8-xms24" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xms24 webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-xms24 3b2ad916-67f3-4f5f-8633-faf11b3cd8f7 16216368 0 2020-05-14 22:00:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052db9f7 0xc0052db9f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:00:45.367: INFO: Pod "webserver-deployment-c7997dcc8-ztwsk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ztwsk webserver-deployment-c7997dcc8- deployment-6885 /api/v1/namespaces/deployment-6885/pods/webserver-deployment-c7997dcc8-ztwsk bb0dd94e-75ae-4b32-9976-5bb305f64da7 16216349 0 2020-05-14 22:00:38 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c40d4184-6a0f-4df0-89a3-35613998d315 0xc0052dbbe7 0xc0052dbbe8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dgrrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dgrrz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dgrrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-14 22:00:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:00:45.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6885" for this suite.
• [SLOW TEST:22.081 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":151,"skipped":2599,"failed":0}
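The pod dumps above all belong to the new ReplicaSet (webserver-deployment-c7997dcc8) and are stuck Pending because webserver:404 is not a pullable image; one of them shows the ErrImagePull detail. That stuck rollout is the point of the test: scaling a Deployment mid-rollout must split the added replicas between the old and new ReplicaSets in proportion to their current sizes, within the rolling-update bounds. A minimal sketch of a Deployment in that shape, using the v1.17-era k8s.io/api types; the replica count and surge/unavailable values are illustrative, not read from this log:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    maxSurge := intstr.FromInt(3)       // illustrative: how far the new ReplicaSet may overshoot
    maxUnavailable := intstr.FromInt(2) // illustrative: how far the old ReplicaSet may undershoot
    labels := map[string]string{"name": "httpd"}

    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment", Namespace: "deployment-6885"},
        Spec: appsv1.DeploymentSpec{
            // Scaling this value while the webserver:404 rollout is stuck is what the test exercises.
            Replicas: int32Ptr(10),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxSurge:       &maxSurge,
                    MaxUnavailable: &maxUnavailable,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "httpd",
                        Image: "webserver:404", // the unpullable tag from the log; keeps the new ReplicaSet unavailable
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(out))
}

Scaling such a Deployment while the rollout is wedged (e.g. kubectl scale deployment webserver-deployment --replicas=30) should then land the extra replicas on both ReplicaSets proportionally, which is what the framework asserts before tearing the namespace down.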
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:00:46.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
May 14 22:00:48.891: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4559" to be "success or failure"
May 14 22:00:48.931: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 39.698467ms
May 14 22:00:51.342: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451345098s
May 14 22:00:53.683: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.792251197s
May 14 22:00:56.282: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.390666485s
May 14 22:00:58.759: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.868356657s
May 14 22:01:01.103: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.211836225s
May 14 22:01:03.542: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.65087123s
May 14 22:01:05.605: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.714002952s
STEP: Saw pod success
May 14 22:01:05.605: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 14 22:01:05.692: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 14 22:01:06.383: INFO: Waiting for pod pod-host-path-test to disappear
May 14 22:01:06.440: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:01:06.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4559" for this suite.
• [SLOW TEST:20.546 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2599,"failed":0}
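From the log, the framework creates pod-host-path-test with a container named test-container-1, polls for up to 5m until the pod reaches "success or failure", and then reads the container's log to check the reported mode. A rough sketch of such a pod, with busybox standing in for the e2e mount-test image (the real image and its flags are not shown in this log, so the command here is illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    hostPathType := corev1.HostPathDirectoryOrCreate // assumption; the framework's default may differ

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test", Namespace: "hostpath-4559"},
        Spec: corev1.PodSpec{
            // Run to completion so the framework's poll can observe Phase=Succeeded.
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path-test", Type: &hostPathType},
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-1", // container name taken from the log
                Image:   "busybox",          // stand-in for the e2e mount-test image
                Command: []string{"sh", "-c", "stat -c '%a' /test-volume"}, // print the mode the test asserts on
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}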
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:01:06.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
May 14 22:01:08.415: INFO: Waiting up to 5m0s for pod "pod-5a8e9902-99c1-441f-9a85-d438ca23628d" in namespace "emptydir-1854" to be "success or failure"
May 14 22:01:08.471: INFO: Pod "pod-5a8e9902-99c1-441f-9a85-d438ca23628d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.943578ms
May 14 22:01:10.555: INFO: Pod "pod-5a8e9902-99c1-441f-9a85-d438ca23628d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139599385s
May 14 22:01:12.675: INFO: Pod "pod-5a8e9902-99c1-441f-9a85-d438ca23628d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260032503s
May 14 22:01:14.752: INFO: Pod "pod-5a8e9902-99c1-441f-9a85-d438ca23628d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.336239132s
STEP: Saw pod success
May 14 22:01:14.752: INFO: Pod "pod-5a8e9902-99c1-441f-9a85-d438ca23628d" satisfied condition "success or failure"
May 14 22:01:14.899: INFO: Trying to get logs from node jerma-worker2 pod pod-5a8e9902-99c1-441f-9a85-d438ca23628d container test-container:
STEP: delete the pod
May 14 22:01:14.998: INFO: Waiting for pod pod-5a8e9902-99c1-441f-9a85-d438ca23628d to disappear
May 14 22:01:15.088: INFO: Pod pod-5a8e9902-99c1-441f-9a85-d438ca23628d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:01:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1854" for this suite.
• [SLOW TEST:8.294 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2618,"failed":0}
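The emptyDir case follows the same pattern: a short-lived pod writes into an emptyDir volume on the default medium (node disk) as a non-root user and reports the resulting 0666 file mode for the framework to assert on. A sketch under the same caveats as above (busybox stand-in for the mount-test image, illustrative UID):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666", Namespace: "emptydir-1854"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // The "non-root" part of the test name; the UID is illustrative.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // Empty EmptyDirVolumeSource means the default medium, i.e. node-local disk.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:  "test-container", // container name taken from the log
                Image: "busybox",        // stand-in for the e2e mount-test image
                Command: []string{"sh", "-c",
                    "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}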
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:01:15.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 14 22:01:23.699: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:23.723: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:25.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:25.809: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:27.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:27.749: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:29.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:29.727: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:31.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:31.727: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:33.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:33.727: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:35.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:35.727: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:37.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:37.727: INFO: Pod pod-with-prestop-http-hook still exists
May 14 22:01:39.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 14 22:01:39.726: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:01:39.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8210" for this suite.
• [SLOW TEST:24.508 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2618,"failed":0}
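Two pods are involved here: the handler container created in BeforeEach, and pod-with-prestop-http-hook, whose PreStop hook performs an HTTP GET against the handler. Deleting the pod is what fires the hook (the grace period gives it time to run, hence the disappear-polling above), and "check prestop hook" verifies the handler saw the request. A sketch of the hooked pod; the host, port, and path are illustrative because the log does not show the handler's address, and note that the v1.17-era type is corev1.Handler (newer k8s.io/api renames it LifecycleHandler):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook", Namespace: "container-lifecycle-hook-8210"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "pod-with-prestop-http-hook",
                Image:   "busybox", // stand-in; any long-running container works for this pattern
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    PreStop: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=prestop",  // illustrative: whatever the handler records
                            Host: "10.244.1.100",       // illustrative: the handler pod's IP
                            Port: intstr.FromInt(8080), // illustrative: the handler pod's port
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}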
• [SLOW TEST:24.508 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2618,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:01:39.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-jdw6 STEP: Creating a pod to test atomic-volume-subpath May 14 22:01:40.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jdw6" in namespace "subpath-67" to be "success or failure" May 14 22:01:40.120: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Pending", Reason="", readiness=false. Elapsed: 60.134822ms May 14 22:01:42.229: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169203204s May 14 22:01:44.233: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 4.173329802s May 14 22:01:46.238: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.178167846s May 14 22:01:48.242: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 8.182485971s May 14 22:01:50.245: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 10.185484284s May 14 22:01:52.250: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 12.190452806s May 14 22:01:54.254: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 14.19425318s May 14 22:01:56.259: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 16.19892754s May 14 22:01:58.263: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 18.203331865s May 14 22:02:00.267: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.207748586s May 14 22:02:02.271: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.211402031s May 14 22:02:04.275: INFO: Pod "pod-subpath-test-configmap-jdw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.214951174s STEP: Saw pod success May 14 22:02:04.275: INFO: Pod "pod-subpath-test-configmap-jdw6" satisfied condition "success or failure" May 14 22:02:04.277: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-jdw6 container test-container-subpath-configmap-jdw6: STEP: delete the pod May 14 22:02:04.326: INFO: Waiting for pod pod-subpath-test-configmap-jdw6 to disappear May 14 22:02:04.462: INFO: Pod pod-subpath-test-configmap-jdw6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-jdw6 May 14 22:02:04.462: INFO: Deleting pod "pod-subpath-test-configmap-jdw6" in namespace "subpath-67" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:02:04.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-67" for this suite. • [SLOW TEST:24.729 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":155,"skipped":2624,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:02:04.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-a2236a52-95a4-4561-98e1-f9122f970029 in namespace container-probe-2054 May 14 22:02:08.695: INFO: Started pod busybox-a2236a52-95a4-4561-98e1-f9122f970029 in namespace container-probe-2054 STEP: checking the pod's current state and verifying that restartCount is present May 14 22:02:08.698: INFO: Initial restart count of pod busybox-a2236a52-95a4-4561-98e1-f9122f970029 is 0 May 14 22:02:57.004: INFO: Restart count of pod container-probe-2054/busybox-a2236a52-95a4-4561-98e1-f9122f970029 is now 1 (48.305598314s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:02:57.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-2054" for this suite. • [SLOW TEST:52.590 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2626,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:02:57.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3d6aae77-f141-4128-98ef-ab4fc43b4b84 STEP: Creating a pod to test consume configMaps May 14 22:02:57.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f" in namespace "configmap-2766" to be "success or failure" May 14 22:02:57.150: INFO: Pod "pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.164682ms May 14 22:02:59.174: INFO: Pod "pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040641718s May 14 22:03:01.260: INFO: Pod "pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12636216s STEP: Saw pod success May 14 22:03:01.260: INFO: Pod "pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f" satisfied condition "success or failure" May 14 22:03:01.262: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f container configmap-volume-test: STEP: delete the pod May 14 22:03:01.283: INFO: Waiting for pod pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f to disappear May 14 22:03:01.287: INFO: Pod pod-configmaps-37f96f31-328e-4009-b525-e5fc5f87380f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:03:01.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2766" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2638,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:03:01.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 14 22:03:01.393: INFO: Waiting up to 5m0s for pod "downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c" in namespace "downward-api-174" to be "success or failure" May 14 22:03:01.401: INFO: Pod "downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09218ms May 14 22:03:03.405: INFO: Pod "downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012048781s May 14 22:03:05.408: INFO: Pod "downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014861393s STEP: Saw pod success May 14 22:03:05.408: INFO: Pod "downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c" satisfied condition "success or failure" May 14 22:03:05.410: INFO: Trying to get logs from node jerma-worker pod downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c container dapi-container: STEP: delete the pod May 14 22:03:05.462: INFO: Waiting for pod downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c to disappear May 14 22:03:05.473: INFO: Pod downward-api-0cfc756e-e33d-442b-8e8d-853f528f660c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:03:05.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-174" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2651,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:03:05.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 14 22:03:05.662: INFO: Waiting up to 5m0s for pod "pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435" in namespace "emptydir-9878" to be "success or failure" May 14 22:03:05.670: INFO: Pod "pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435": Phase="Pending", Reason="", readiness=false. Elapsed: 8.452326ms May 14 22:03:07.674: INFO: Pod "pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012224578s May 14 22:03:09.678: INFO: Pod "pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016125485s STEP: Saw pod success May 14 22:03:09.678: INFO: Pod "pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435" satisfied condition "success or failure" May 14 22:03:09.680: INFO: Trying to get logs from node jerma-worker2 pod pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435 container test-container: STEP: delete the pod May 14 22:03:09.719: INFO: Waiting for pod pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435 to disappear May 14 22:03:09.786: INFO: Pod pod-d9aea56e-d0e8-4d65-9bad-eb5f9bbaa435 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:03:09.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9878" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2660,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:03:09.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-6kb9 STEP: Creating a pod to test atomic-volume-subpath May 14 22:03:10.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6kb9" in namespace "subpath-5555" to be "success or failure" May 14 22:03:10.102: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520317ms May 14 22:03:12.123: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02488008s May 14 22:03:14.127: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.028837625s May 14 22:03:16.131: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 6.033077275s May 14 22:03:18.134: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 8.036814937s May 14 22:03:20.138: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 10.04017041s May 14 22:03:22.142: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 12.044020007s May 14 22:03:24.146: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 14.048498557s May 14 22:03:26.151: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 16.053082584s May 14 22:03:28.155: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 18.057012623s May 14 22:03:30.158: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 20.060577211s May 14 22:03:32.162: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Running", Reason="", readiness=true. Elapsed: 22.063887535s May 14 22:03:34.167: INFO: Pod "pod-subpath-test-secret-6kb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.068983721s STEP: Saw pod success May 14 22:03:34.167: INFO: Pod "pod-subpath-test-secret-6kb9" satisfied condition "success or failure" May 14 22:03:34.170: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-6kb9 container test-container-subpath-secret-6kb9: STEP: delete the pod May 14 22:03:34.219: INFO: Waiting for pod pod-subpath-test-secret-6kb9 to disappear May 14 22:03:34.228: INFO: Pod pod-subpath-test-secret-6kb9 no longer exists STEP: Deleting pod pod-subpath-test-secret-6kb9 May 14 22:03:34.228: INFO: Deleting pod "pod-subpath-test-secret-6kb9" in namespace "subpath-5555" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:03:34.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5555" for this suite. • [SLOW TEST:24.442 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":160,"skipped":2660,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:03:34.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:03:38.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7644" for this suite. 
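Note: the two tests just above exercise mounting a single secret key at an existing path via subPath, and running a container whose root filesystem is read-only. A minimal sketch combining both (the secret name, key, and paths are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/demo/key1"]
    securityContext:
      readOnlyRootFilesystem: true   # writes to the container's rootfs will fail
    volumeMounts:
    - name: demo-secret
      mountPath: /etc/demo/key1
      subPath: key1                  # mount one key of the secret at an existing path
  volumes:
  - name: demo-secret
    secret:
      secretName: demo-secret        # hypothetical secret containing a "key1" entry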
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:03:38.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 14 22:03:38.420: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 14 22:03:38.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3264' May 14 22:03:38.779: INFO: stderr: "" May 14 22:03:38.779: INFO: stdout: "service/agnhost-slave created\n" May 14 22:03:38.779: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 14 22:03:38.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3264' May 14 22:03:39.159: INFO: stderr: "" May 14 22:03:39.159: INFO: stdout: "service/agnhost-master created\n" May 14 22:03:39.159: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 14 22:03:39.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3264' May 14 22:03:39.494: INFO: stderr: "" May 14 22:03:39.494: INFO: stdout: "service/frontend created\n" May 14 22:03:39.494: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 14 22:03:39.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3264' May 14 22:03:39.771: INFO: stderr: "" May 14 22:03:39.771: INFO: stdout: "deployment.apps/frontend created\n" May 14 22:03:39.771: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 14 22:03:39.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3264' May 14 22:03:40.065: INFO: stderr: "" May 14 22:03:40.065: INFO: stdout: "deployment.apps/agnhost-master created\n" May 14 22:03:40.065: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 14 22:03:40.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3264' May 14 22:03:40.342: INFO: stderr: "" May 14 22:03:40.342: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 14 22:03:40.342: INFO: Waiting for all frontend pods to be Running. May 14 22:03:50.392: INFO: Waiting for frontend to serve content. May 14 22:03:50.404: INFO: Trying to add a new entry to the guestbook. May 14 22:03:50.415: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 14 22:03:50.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3264' May 14 22:03:50.594: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:03:50.594: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 14 22:03:50.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3264' May 14 22:03:50.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 14 22:03:50.786: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 14 22:03:50.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3264' May 14 22:03:50.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:03:50.942: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 14 22:03:50.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3264' May 14 22:03:51.054: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:03:51.054: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 14 22:03:51.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3264' May 14 22:03:51.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:03:51.157: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 14 22:03:51.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3264' May 14 22:03:51.265: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:03:51.265: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:03:51.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3264" for this suite. 
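Note: the cleanup above runs kubectl delete --grace-period=0 --force, which is why the server prints the "Immediate deletion" warning. At the API level this corresponds to a DeleteOptions body roughly like the following sketch:
apiVersion: v1
kind: DeleteOptions
gracePeriodSeconds: 0   # skip the graceful-termination wait; the object is removed immediately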
• [SLOW TEST:12.904 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":162,"skipped":2693,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:03:51.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 14 22:04:05.592: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:05.592: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:05.628740 6 log.go:172] (0xc001df8790) (0xc002848500) Create stream I0514 22:04:05.628788 6 log.go:172] (0xc001df8790) (0xc002848500) Stream added, broadcasting: 1 I0514 22:04:05.631214 6 log.go:172] (0xc001df8790) Reply frame received for 1 I0514 22:04:05.631281 6 log.go:172] (0xc001df8790) (0xc000100fa0) Create stream I0514 22:04:05.631308 6 log.go:172] (0xc001df8790) (0xc000100fa0) Stream added, broadcasting: 3 I0514 22:04:05.632249 6 log.go:172] (0xc001df8790) Reply frame received for 3 I0514 22:04:05.632293 6 log.go:172] (0xc001df8790) (0xc0001014a0) Create stream I0514 22:04:05.632305 6 log.go:172] (0xc001df8790) (0xc0001014a0) Stream added, broadcasting: 5 I0514 22:04:05.633350 6 log.go:172] (0xc001df8790) Reply frame received for 5 I0514 22:04:05.703120 6 log.go:172] (0xc001df8790) Data frame received for 5 I0514 22:04:05.703159 6 log.go:172] (0xc0001014a0) (5) Data frame handling I0514 22:04:05.703189 6 log.go:172] (0xc001df8790) Data frame received for 3 I0514 22:04:05.703214 6 log.go:172] (0xc000100fa0) (3) Data frame handling I0514 22:04:05.703229 6 log.go:172] (0xc000100fa0) (3) Data frame sent I0514 22:04:05.703239 6 log.go:172] (0xc001df8790) Data frame received for 3 I0514 22:04:05.703251 6 log.go:172] (0xc000100fa0) (3) Data frame handling I0514 22:04:05.704510 6 log.go:172] (0xc001df8790) Data frame received for 1 I0514 22:04:05.704562 6 log.go:172] (0xc002848500) (1) Data frame handling I0514 22:04:05.704600 6 log.go:172] (0xc002848500) (1) Data frame sent I0514 22:04:05.704631 6 log.go:172] (0xc001df8790) (0xc002848500) Stream 
removed, broadcasting: 1 I0514 22:04:05.704652 6 log.go:172] (0xc001df8790) Go away received I0514 22:04:05.704761 6 log.go:172] (0xc001df8790) (0xc002848500) Stream removed, broadcasting: 1 I0514 22:04:05.704783 6 log.go:172] (0xc001df8790) (0xc000100fa0) Stream removed, broadcasting: 3 I0514 22:04:05.704791 6 log.go:172] (0xc001df8790) (0xc0001014a0) Stream removed, broadcasting: 5 May 14 22:04:05.704: INFO: Exec stderr: "" May 14 22:04:05.704: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:05.704: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:05.732819 6 log.go:172] (0xc0023ae370) (0xc0022b6320) Create stream I0514 22:04:05.732851 6 log.go:172] (0xc0023ae370) (0xc0022b6320) Stream added, broadcasting: 1 I0514 22:04:05.734651 6 log.go:172] (0xc0023ae370) Reply frame received for 1 I0514 22:04:05.734698 6 log.go:172] (0xc0023ae370) (0xc0012d0b40) Create stream I0514 22:04:05.734712 6 log.go:172] (0xc0023ae370) (0xc0012d0b40) Stream added, broadcasting: 3 I0514 22:04:05.735790 6 log.go:172] (0xc0023ae370) Reply frame received for 3 I0514 22:04:05.735840 6 log.go:172] (0xc0023ae370) (0xc000dce0a0) Create stream I0514 22:04:05.735859 6 log.go:172] (0xc0023ae370) (0xc000dce0a0) Stream added, broadcasting: 5 I0514 22:04:05.736786 6 log.go:172] (0xc0023ae370) Reply frame received for 5 I0514 22:04:05.804508 6 log.go:172] (0xc0023ae370) Data frame received for 5 I0514 22:04:05.804546 6 log.go:172] (0xc000dce0a0) (5) Data frame handling I0514 22:04:05.804572 6 log.go:172] (0xc0023ae370) Data frame received for 3 I0514 22:04:05.804585 6 log.go:172] (0xc0012d0b40) (3) Data frame handling I0514 22:04:05.804596 6 log.go:172] (0xc0012d0b40) (3) Data frame sent I0514 22:04:05.804608 6 log.go:172] (0xc0023ae370) Data frame received for 3 I0514 22:04:05.804626 6 log.go:172] (0xc0012d0b40) (3) Data frame handling I0514 22:04:05.805986 6 log.go:172] (0xc0023ae370) Data frame received for 1 I0514 22:04:05.806007 6 log.go:172] (0xc0022b6320) (1) Data frame handling I0514 22:04:05.806019 6 log.go:172] (0xc0022b6320) (1) Data frame sent I0514 22:04:05.806036 6 log.go:172] (0xc0023ae370) (0xc0022b6320) Stream removed, broadcasting: 1 I0514 22:04:05.806054 6 log.go:172] (0xc0023ae370) Go away received I0514 22:04:05.806203 6 log.go:172] (0xc0023ae370) (0xc0022b6320) Stream removed, broadcasting: 1 I0514 22:04:05.806225 6 log.go:172] (0xc0023ae370) (0xc0012d0b40) Stream removed, broadcasting: 3 I0514 22:04:05.806239 6 log.go:172] (0xc0023ae370) (0xc000dce0a0) Stream removed, broadcasting: 5 May 14 22:04:05.806: INFO: Exec stderr: "" May 14 22:04:05.806: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:05.806: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:05.838700 6 log.go:172] (0xc002794580) (0xc0012d12c0) Create stream I0514 22:04:05.838744 6 log.go:172] (0xc002794580) (0xc0012d12c0) Stream added, broadcasting: 1 I0514 22:04:05.840970 6 log.go:172] (0xc002794580) Reply frame received for 1 I0514 22:04:05.840993 6 log.go:172] (0xc002794580) (0xc000d3bd60) Create stream I0514 22:04:05.841001 6 log.go:172] (0xc002794580) (0xc000d3bd60) Stream added, broadcasting: 3 I0514 22:04:05.842174 6 log.go:172] (0xc002794580) Reply frame received for 3 I0514 22:04:05.842215 6 
log.go:172] (0xc002794580) (0xc000dce140) Create stream I0514 22:04:05.842228 6 log.go:172] (0xc002794580) (0xc000dce140) Stream added, broadcasting: 5 I0514 22:04:05.843620 6 log.go:172] (0xc002794580) Reply frame received for 5 I0514 22:04:05.907911 6 log.go:172] (0xc002794580) Data frame received for 3 I0514 22:04:05.907961 6 log.go:172] (0xc000d3bd60) (3) Data frame handling I0514 22:04:05.907971 6 log.go:172] (0xc000d3bd60) (3) Data frame sent I0514 22:04:05.907983 6 log.go:172] (0xc002794580) Data frame received for 3 I0514 22:04:05.907988 6 log.go:172] (0xc000d3bd60) (3) Data frame handling I0514 22:04:05.908020 6 log.go:172] (0xc002794580) Data frame received for 5 I0514 22:04:05.908061 6 log.go:172] (0xc000dce140) (5) Data frame handling I0514 22:04:05.909733 6 log.go:172] (0xc002794580) Data frame received for 1 I0514 22:04:05.909767 6 log.go:172] (0xc0012d12c0) (1) Data frame handling I0514 22:04:05.909795 6 log.go:172] (0xc0012d12c0) (1) Data frame sent I0514 22:04:05.909825 6 log.go:172] (0xc002794580) (0xc0012d12c0) Stream removed, broadcasting: 1 I0514 22:04:05.909855 6 log.go:172] (0xc002794580) Go away received I0514 22:04:05.909950 6 log.go:172] (0xc002794580) (0xc0012d12c0) Stream removed, broadcasting: 1 I0514 22:04:05.909976 6 log.go:172] (0xc002794580) (0xc000d3bd60) Stream removed, broadcasting: 3 I0514 22:04:05.909994 6 log.go:172] (0xc002794580) (0xc000dce140) Stream removed, broadcasting: 5 May 14 22:04:05.910: INFO: Exec stderr: "" May 14 22:04:05.910: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:05.910: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:05.940766 6 log.go:172] (0xc001df8dc0) (0xc002848820) Create stream I0514 22:04:05.940795 6 log.go:172] (0xc001df8dc0) (0xc002848820) Stream added, broadcasting: 1 I0514 22:04:05.942897 6 log.go:172] (0xc001df8dc0) Reply frame received for 1 I0514 22:04:05.942934 6 log.go:172] (0xc001df8dc0) (0xc000dce1e0) Create stream I0514 22:04:05.942949 6 log.go:172] (0xc001df8dc0) (0xc000dce1e0) Stream added, broadcasting: 3 I0514 22:04:05.944005 6 log.go:172] (0xc001df8dc0) Reply frame received for 3 I0514 22:04:05.944051 6 log.go:172] (0xc001df8dc0) (0xc0012d1540) Create stream I0514 22:04:05.944070 6 log.go:172] (0xc001df8dc0) (0xc0012d1540) Stream added, broadcasting: 5 I0514 22:04:05.945472 6 log.go:172] (0xc001df8dc0) Reply frame received for 5 I0514 22:04:06.005906 6 log.go:172] (0xc001df8dc0) Data frame received for 5 I0514 22:04:06.005955 6 log.go:172] (0xc0012d1540) (5) Data frame handling I0514 22:04:06.005989 6 log.go:172] (0xc001df8dc0) Data frame received for 3 I0514 22:04:06.006008 6 log.go:172] (0xc000dce1e0) (3) Data frame handling I0514 22:04:06.006025 6 log.go:172] (0xc000dce1e0) (3) Data frame sent I0514 22:04:06.006040 6 log.go:172] (0xc001df8dc0) Data frame received for 3 I0514 22:04:06.006053 6 log.go:172] (0xc000dce1e0) (3) Data frame handling I0514 22:04:06.007598 6 log.go:172] (0xc001df8dc0) Data frame received for 1 I0514 22:04:06.007615 6 log.go:172] (0xc002848820) (1) Data frame handling I0514 22:04:06.007630 6 log.go:172] (0xc002848820) (1) Data frame sent I0514 22:04:06.007639 6 log.go:172] (0xc001df8dc0) (0xc002848820) Stream removed, broadcasting: 1 I0514 22:04:06.007746 6 log.go:172] (0xc001df8dc0) (0xc002848820) Stream removed, broadcasting: 1 I0514 22:04:06.007764 6 log.go:172] (0xc001df8dc0) (0xc000dce1e0) Stream 
removed, broadcasting: 3 I0514 22:04:06.007830 6 log.go:172] (0xc001df8dc0) Go away received I0514 22:04:06.007889 6 log.go:172] (0xc001df8dc0) (0xc0012d1540) Stream removed, broadcasting: 5 May 14 22:04:06.007: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 14 22:04:06.007: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:06.007: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:06.042693 6 log.go:172] (0xc0023ae9a0) (0xc0022b65a0) Create stream I0514 22:04:06.042715 6 log.go:172] (0xc0023ae9a0) (0xc0022b65a0) Stream added, broadcasting: 1 I0514 22:04:06.044466 6 log.go:172] (0xc0023ae9a0) Reply frame received for 1 I0514 22:04:06.044506 6 log.go:172] (0xc0023ae9a0) (0xc0028488c0) Create stream I0514 22:04:06.044522 6 log.go:172] (0xc0023ae9a0) (0xc0028488c0) Stream added, broadcasting: 3 I0514 22:04:06.045637 6 log.go:172] (0xc0023ae9a0) Reply frame received for 3 I0514 22:04:06.045682 6 log.go:172] (0xc0023ae9a0) (0xc002848aa0) Create stream I0514 22:04:06.045698 6 log.go:172] (0xc0023ae9a0) (0xc002848aa0) Stream added, broadcasting: 5 I0514 22:04:06.046586 6 log.go:172] (0xc0023ae9a0) Reply frame received for 5 I0514 22:04:06.108327 6 log.go:172] (0xc0023ae9a0) Data frame received for 3 I0514 22:04:06.108356 6 log.go:172] (0xc0028488c0) (3) Data frame handling I0514 22:04:06.108365 6 log.go:172] (0xc0028488c0) (3) Data frame sent I0514 22:04:06.108371 6 log.go:172] (0xc0023ae9a0) Data frame received for 3 I0514 22:04:06.108376 6 log.go:172] (0xc0028488c0) (3) Data frame handling I0514 22:04:06.108398 6 log.go:172] (0xc0023ae9a0) Data frame received for 5 I0514 22:04:06.108415 6 log.go:172] (0xc002848aa0) (5) Data frame handling I0514 22:04:06.109629 6 log.go:172] (0xc0023ae9a0) Data frame received for 1 I0514 22:04:06.109648 6 log.go:172] (0xc0022b65a0) (1) Data frame handling I0514 22:04:06.109658 6 log.go:172] (0xc0022b65a0) (1) Data frame sent I0514 22:04:06.109671 6 log.go:172] (0xc0023ae9a0) (0xc0022b65a0) Stream removed, broadcasting: 1 I0514 22:04:06.109782 6 log.go:172] (0xc0023ae9a0) Go away received I0514 22:04:06.109825 6 log.go:172] (0xc0023ae9a0) (0xc0022b65a0) Stream removed, broadcasting: 1 I0514 22:04:06.109855 6 log.go:172] (0xc0023ae9a0) (0xc0028488c0) Stream removed, broadcasting: 3 I0514 22:04:06.109868 6 log.go:172] (0xc0023ae9a0) (0xc002848aa0) Stream removed, broadcasting: 5 May 14 22:04:06.109: INFO: Exec stderr: "" May 14 22:04:06.109: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:06.109: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:06.145738 6 log.go:172] (0xc0023af1e0) (0xc0022b68c0) Create stream I0514 22:04:06.145772 6 log.go:172] (0xc0023af1e0) (0xc0022b68c0) Stream added, broadcasting: 1 I0514 22:04:06.147954 6 log.go:172] (0xc0023af1e0) Reply frame received for 1 I0514 22:04:06.147996 6 log.go:172] (0xc0023af1e0) (0xc002848be0) Create stream I0514 22:04:06.148013 6 log.go:172] (0xc0023af1e0) (0xc002848be0) Stream added, broadcasting: 3 I0514 22:04:06.148966 6 log.go:172] (0xc0023af1e0) Reply frame received for 3 I0514 22:04:06.149000 6 log.go:172] (0xc0023af1e0) (0xc000dce320) Create stream I0514 22:04:06.149011 6 log.go:172] 
(0xc0023af1e0) (0xc000dce320) Stream added, broadcasting: 5 I0514 22:04:06.150501 6 log.go:172] (0xc0023af1e0) Reply frame received for 5 I0514 22:04:06.219409 6 log.go:172] (0xc0023af1e0) Data frame received for 3 I0514 22:04:06.219443 6 log.go:172] (0xc002848be0) (3) Data frame handling I0514 22:04:06.219474 6 log.go:172] (0xc002848be0) (3) Data frame sent I0514 22:04:06.219490 6 log.go:172] (0xc0023af1e0) Data frame received for 3 I0514 22:04:06.219505 6 log.go:172] (0xc002848be0) (3) Data frame handling I0514 22:04:06.219602 6 log.go:172] (0xc0023af1e0) Data frame received for 5 I0514 22:04:06.219635 6 log.go:172] (0xc000dce320) (5) Data frame handling I0514 22:04:06.221426 6 log.go:172] (0xc0023af1e0) Data frame received for 1 I0514 22:04:06.221449 6 log.go:172] (0xc0022b68c0) (1) Data frame handling I0514 22:04:06.221463 6 log.go:172] (0xc0022b68c0) (1) Data frame sent I0514 22:04:06.221478 6 log.go:172] (0xc0023af1e0) (0xc0022b68c0) Stream removed, broadcasting: 1 I0514 22:04:06.221585 6 log.go:172] (0xc0023af1e0) (0xc0022b68c0) Stream removed, broadcasting: 1 I0514 22:04:06.221604 6 log.go:172] (0xc0023af1e0) (0xc002848be0) Stream removed, broadcasting: 3 I0514 22:04:06.221759 6 log.go:172] (0xc0023af1e0) (0xc000dce320) Stream removed, broadcasting: 5 May 14 22:04:06.222: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 14 22:04:06.222: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:06.222: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:06.224166 6 log.go:172] (0xc0023af1e0) Go away received I0514 22:04:06.257902 6 log.go:172] (0xc0023af550) (0xc0022b6c80) Create stream I0514 22:04:06.257929 6 log.go:172] (0xc0023af550) (0xc0022b6c80) Stream added, broadcasting: 1 I0514 22:04:06.260423 6 log.go:172] (0xc0023af550) Reply frame received for 1 I0514 22:04:06.260468 6 log.go:172] (0xc0023af550) (0xc0012d15e0) Create stream I0514 22:04:06.260485 6 log.go:172] (0xc0023af550) (0xc0012d15e0) Stream added, broadcasting: 3 I0514 22:04:06.261741 6 log.go:172] (0xc0023af550) Reply frame received for 3 I0514 22:04:06.261787 6 log.go:172] (0xc0023af550) (0xc0012d1720) Create stream I0514 22:04:06.261800 6 log.go:172] (0xc0023af550) (0xc0012d1720) Stream added, broadcasting: 5 I0514 22:04:06.262835 6 log.go:172] (0xc0023af550) Reply frame received for 5 I0514 22:04:06.322320 6 log.go:172] (0xc0023af550) Data frame received for 5 I0514 22:04:06.322370 6 log.go:172] (0xc0012d1720) (5) Data frame handling I0514 22:04:06.322398 6 log.go:172] (0xc0023af550) Data frame received for 3 I0514 22:04:06.322412 6 log.go:172] (0xc0012d15e0) (3) Data frame handling I0514 22:04:06.322428 6 log.go:172] (0xc0012d15e0) (3) Data frame sent I0514 22:04:06.322443 6 log.go:172] (0xc0023af550) Data frame received for 3 I0514 22:04:06.322455 6 log.go:172] (0xc0012d15e0) (3) Data frame handling I0514 22:04:06.323988 6 log.go:172] (0xc0023af550) Data frame received for 1 I0514 22:04:06.324026 6 log.go:172] (0xc0022b6c80) (1) Data frame handling I0514 22:04:06.324055 6 log.go:172] (0xc0022b6c80) (1) Data frame sent I0514 22:04:06.324074 6 log.go:172] (0xc0023af550) (0xc0022b6c80) Stream removed, broadcasting: 1 I0514 22:04:06.324095 6 log.go:172] (0xc0023af550) Go away received I0514 22:04:06.324241 6 log.go:172] (0xc0023af550) (0xc0022b6c80) Stream removed, 
broadcasting: 1 I0514 22:04:06.324271 6 log.go:172] (0xc0023af550) (0xc0012d15e0) Stream removed, broadcasting: 3 I0514 22:04:06.324299 6 log.go:172] (0xc0023af550) (0xc0012d1720) Stream removed, broadcasting: 5 May 14 22:04:06.324: INFO: Exec stderr: "" May 14 22:04:06.324: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:06.324: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:06.355531 6 log.go:172] (0xc0023afb80) (0xc0022b7360) Create stream I0514 22:04:06.355570 6 log.go:172] (0xc0023afb80) (0xc0022b7360) Stream added, broadcasting: 1 I0514 22:04:06.357441 6 log.go:172] (0xc0023afb80) Reply frame received for 1 I0514 22:04:06.357478 6 log.go:172] (0xc0023afb80) (0xc000dce780) Create stream I0514 22:04:06.357489 6 log.go:172] (0xc0023afb80) (0xc000dce780) Stream added, broadcasting: 3 I0514 22:04:06.358379 6 log.go:172] (0xc0023afb80) Reply frame received for 3 I0514 22:04:06.358416 6 log.go:172] (0xc0023afb80) (0xc002848c80) Create stream I0514 22:04:06.358429 6 log.go:172] (0xc0023afb80) (0xc002848c80) Stream added, broadcasting: 5 I0514 22:04:06.359366 6 log.go:172] (0xc0023afb80) Reply frame received for 5 I0514 22:04:06.436713 6 log.go:172] (0xc0023afb80) Data frame received for 5 I0514 22:04:06.436757 6 log.go:172] (0xc002848c80) (5) Data frame handling I0514 22:04:06.436787 6 log.go:172] (0xc0023afb80) Data frame received for 3 I0514 22:04:06.436801 6 log.go:172] (0xc000dce780) (3) Data frame handling I0514 22:04:06.436817 6 log.go:172] (0xc000dce780) (3) Data frame sent I0514 22:04:06.436830 6 log.go:172] (0xc0023afb80) Data frame received for 3 I0514 22:04:06.436842 6 log.go:172] (0xc000dce780) (3) Data frame handling I0514 22:04:06.438666 6 log.go:172] (0xc0023afb80) Data frame received for 1 I0514 22:04:06.438691 6 log.go:172] (0xc0022b7360) (1) Data frame handling I0514 22:04:06.438706 6 log.go:172] (0xc0022b7360) (1) Data frame sent I0514 22:04:06.438717 6 log.go:172] (0xc0023afb80) (0xc0022b7360) Stream removed, broadcasting: 1 I0514 22:04:06.438733 6 log.go:172] (0xc0023afb80) Go away received I0514 22:04:06.438931 6 log.go:172] (0xc0023afb80) (0xc0022b7360) Stream removed, broadcasting: 1 I0514 22:04:06.438961 6 log.go:172] (0xc0023afb80) (0xc000dce780) Stream removed, broadcasting: 3 I0514 22:04:06.438981 6 log.go:172] (0xc0023afb80) (0xc002848c80) Stream removed, broadcasting: 5 May 14 22:04:06.438: INFO: Exec stderr: "" May 14 22:04:06.439: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:06.439: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:06.478454 6 log.go:172] (0xc003156370) (0xc0022b7860) Create stream I0514 22:04:06.478481 6 log.go:172] (0xc003156370) (0xc0022b7860) Stream added, broadcasting: 1 I0514 22:04:06.480574 6 log.go:172] (0xc003156370) Reply frame received for 1 I0514 22:04:06.480617 6 log.go:172] (0xc003156370) (0xc000d3bf40) Create stream I0514 22:04:06.480631 6 log.go:172] (0xc003156370) (0xc000d3bf40) Stream added, broadcasting: 3 I0514 22:04:06.481598 6 log.go:172] (0xc003156370) Reply frame received for 3 I0514 22:04:06.481639 6 log.go:172] (0xc003156370) (0xc0010940a0) Create stream I0514 22:04:06.481660 6 log.go:172] (0xc003156370) (0xc0010940a0) Stream added, broadcasting: 5 I0514 
22:04:06.482488 6 log.go:172] (0xc003156370) Reply frame received for 5 I0514 22:04:06.544745 6 log.go:172] (0xc003156370) Data frame received for 3 I0514 22:04:06.544787 6 log.go:172] (0xc000d3bf40) (3) Data frame handling I0514 22:04:06.544801 6 log.go:172] (0xc000d3bf40) (3) Data frame sent I0514 22:04:06.544813 6 log.go:172] (0xc003156370) Data frame received for 3 I0514 22:04:06.544820 6 log.go:172] (0xc000d3bf40) (3) Data frame handling I0514 22:04:06.544841 6 log.go:172] (0xc003156370) Data frame received for 5 I0514 22:04:06.544850 6 log.go:172] (0xc0010940a0) (5) Data frame handling I0514 22:04:06.546488 6 log.go:172] (0xc003156370) Data frame received for 1 I0514 22:04:06.546510 6 log.go:172] (0xc0022b7860) (1) Data frame handling I0514 22:04:06.546524 6 log.go:172] (0xc0022b7860) (1) Data frame sent I0514 22:04:06.546537 6 log.go:172] (0xc003156370) (0xc0022b7860) Stream removed, broadcasting: 1 I0514 22:04:06.546550 6 log.go:172] (0xc003156370) Go away received I0514 22:04:06.546673 6 log.go:172] (0xc003156370) (0xc0022b7860) Stream removed, broadcasting: 1 I0514 22:04:06.546695 6 log.go:172] (0xc003156370) (0xc000d3bf40) Stream removed, broadcasting: 3 I0514 22:04:06.546704 6 log.go:172] (0xc003156370) (0xc0010940a0) Stream removed, broadcasting: 5 May 14 22:04:06.546: INFO: Exec stderr: "" May 14 22:04:06.546: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8397 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:04:06.546: INFO: >>> kubeConfig: /root/.kube/config I0514 22:04:06.582985 6 log.go:172] (0xc002794d10) (0xc0012d1b80) Create stream I0514 22:04:06.583009 6 log.go:172] (0xc002794d10) (0xc0012d1b80) Stream added, broadcasting: 1 I0514 22:04:06.585235 6 log.go:172] (0xc002794d10) Reply frame received for 1 I0514 22:04:06.585261 6 log.go:172] (0xc002794d10) (0xc002848d20) Create stream I0514 22:04:06.585266 6 log.go:172] (0xc002794d10) (0xc002848d20) Stream added, broadcasting: 3 I0514 22:04:06.586363 6 log.go:172] (0xc002794d10) Reply frame received for 3 I0514 22:04:06.586397 6 log.go:172] (0xc002794d10) (0xc0022b7a40) Create stream I0514 22:04:06.586412 6 log.go:172] (0xc002794d10) (0xc0022b7a40) Stream added, broadcasting: 5 I0514 22:04:06.587630 6 log.go:172] (0xc002794d10) Reply frame received for 5 I0514 22:04:06.666898 6 log.go:172] (0xc002794d10) Data frame received for 5 I0514 22:04:06.666927 6 log.go:172] (0xc0022b7a40) (5) Data frame handling I0514 22:04:06.666956 6 log.go:172] (0xc002794d10) Data frame received for 3 I0514 22:04:06.666968 6 log.go:172] (0xc002848d20) (3) Data frame handling I0514 22:04:06.666983 6 log.go:172] (0xc002848d20) (3) Data frame sent I0514 22:04:06.666992 6 log.go:172] (0xc002794d10) Data frame received for 3 I0514 22:04:06.667000 6 log.go:172] (0xc002848d20) (3) Data frame handling I0514 22:04:06.668419 6 log.go:172] (0xc002794d10) Data frame received for 1 I0514 22:04:06.668437 6 log.go:172] (0xc0012d1b80) (1) Data frame handling I0514 22:04:06.668453 6 log.go:172] (0xc0012d1b80) (1) Data frame sent I0514 22:04:06.668465 6 log.go:172] (0xc002794d10) (0xc0012d1b80) Stream removed, broadcasting: 1 I0514 22:04:06.668479 6 log.go:172] (0xc002794d10) Go away received I0514 22:04:06.668639 6 log.go:172] (0xc002794d10) (0xc0012d1b80) Stream removed, broadcasting: 1 I0514 22:04:06.668661 6 log.go:172] (0xc002794d10) (0xc002848d20) Stream removed, broadcasting: 3 I0514 22:04:06.668675 6 log.go:172] (0xc002794d10) 
(0xc0022b7a40) Stream removed, broadcasting: 5 May 14 22:04:06.668: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:04:06.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8397" for this suite. • [SLOW TEST:15.411 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2714,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:04:06.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 22:04:06.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3724' May 14 22:04:06.867: INFO: stderr: "" May 14 22:04:06.867: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 14 22:04:11.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3724 -o json' May 14 22:04:12.010: INFO: stderr: "" May 14 22:04:12.010: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-14T22:04:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3724\",\n \"resourceVersion\": \"16217932\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3724/pods/e2e-test-httpd-pod\",\n \"uid\": \"b808fdaf-0bcb-4886-8aaf-6a821834be94\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-46wvl\",\n 
\"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-46wvl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-46wvl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T22:04:06Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T22:04:09Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T22:04:09Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T22:04:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://786a84ccc80431977f9004a4ca5216698d015ca76ebf7594a8a0accefaa271d1\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-14T22:04:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.231\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.231\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-14T22:04:06Z\"\n }\n}\n" STEP: replace the image in the pod May 14 22:04:12.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3724' May 14 22:04:12.245: INFO: stderr: "" May 14 22:04:12.245: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 14 22:04:12.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3724' May 14 22:04:19.508: INFO: stderr: "" May 14 22:04:19.508: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:04:19.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3724" for this suite. 
• [SLOW TEST:12.852 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":164,"skipped":2715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:04:19.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0514 22:05:00.131420 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 22:05:00.131: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:05:00.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7090" for this suite. 
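The orphaning verified above comes down to the delete options' propagation policy. A rough command-line equivalent, assuming a replication controller named my-rc whose pods carry the label app=my-rc (both names illustrative); kubectl v1.17 spells the option --cascade=false, newer clients --cascade=orphan:

# Delete only the RC; the garbage collector leaves its pods running as orphans.
kubectl delete rc my-rc --cascade=false
# The pods survive and no longer carry an ownerReference to the deleted RC.
kubectl get pods -l app=my-rc \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.ownerReferences}{"\n"}{end}'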
• [SLOW TEST:40.602 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":165,"skipped":2739,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:05:00.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 14 22:05:06.934: INFO: Successfully updated pod "adopt-release-df6lc" STEP: Checking that the Job readopts the Pod May 14 22:05:06.934: INFO: Waiting up to 15m0s for pod "adopt-release-df6lc" in namespace "job-8339" to be "adopted" May 14 22:05:07.211: INFO: Pod "adopt-release-df6lc": Phase="Running", Reason="", readiness=true. Elapsed: 276.282702ms May 14 22:05:09.214: INFO: Pod "adopt-release-df6lc": Phase="Running", Reason="", readiness=true. Elapsed: 2.280085419s May 14 22:05:09.214: INFO: Pod "adopt-release-df6lc" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 14 22:05:09.722: INFO: Successfully updated pod "adopt-release-df6lc" STEP: Checking that the Job releases the Pod May 14 22:05:09.722: INFO: Waiting up to 15m0s for pod "adopt-release-df6lc" in namespace "job-8339" to be "released" May 14 22:05:10.040: INFO: Pod "adopt-release-df6lc": Phase="Running", Reason="", readiness=true. Elapsed: 317.989386ms May 14 22:05:12.043: INFO: Pod "adopt-release-df6lc": Phase="Running", Reason="", readiness=true. Elapsed: 2.321004637s May 14 22:05:12.043: INFO: Pod "adopt-release-df6lc" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:05:12.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8339" for this suite. 
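Adoption and release above are driven entirely by labels and ownerReferences, so the same dance can be approximated with kubectl patch. A sketch with illustrative names (pod-x standing in for one of the Job's pods); controller-uid and job-name are, to the best of my reading of this release, the labels the Job controller stamps on its pods:

# Orphan the pod by stripping its ownerReferences; the Job controller, finding
# a label-matching pod with no owner, re-adopts it on its next sync.
kubectl patch pod pod-x --type=json \
  -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
# Conversely, removing the labels the Job's selector matches makes the
# controller release the pod rather than manage it.
kubectl label pod pod-x controller-uid- job-name-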
• [SLOW TEST:11.910 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":166,"skipped":2741,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:05:12.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 14 22:05:12.508: INFO: Waiting up to 5m0s for pod "pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3" in namespace "emptydir-2052" to be "success or failure" May 14 22:05:12.533: INFO: Pod "pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.908683ms May 14 22:05:14.537: INFO: Pod "pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029214996s May 14 22:05:16.542: INFO: Pod "pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3": Phase="Running", Reason="", readiness=true. Elapsed: 4.033625328s May 14 22:05:18.546: INFO: Pod "pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038116412s STEP: Saw pod success May 14 22:05:18.546: INFO: Pod "pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3" satisfied condition "success or failure" May 14 22:05:18.550: INFO: Trying to get logs from node jerma-worker pod pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3 container test-container: STEP: delete the pod May 14 22:05:18.583: INFO: Waiting for pod pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3 to disappear May 14 22:05:18.588: INFO: Pod pod-724f1544-9cc3-4408-bbfb-b4e1ab01a5d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:05:18.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2052" for this suite. 
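The (root,0777,default) case boils down to mounting a default-medium emptyDir and checking its mode bits from inside the container. A minimal sketch of an equivalent manifest (names illustrative; the suite itself uses its mounttest image rather than a shell one-liner):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Print the mount's permissions; 0777 is what this variant asserts.
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium set, i.e. the node's default backing store
EOF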
• [SLOW TEST:6.570 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:05:18.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8051.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8051.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8051.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8051.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8051.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8051.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 22:05:24.745: INFO: DNS probes using dns-8051/dns-test-7c513331-cab1-4f73-acc5-c12622d3aa73 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:05:24.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8051" for this suite. 
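Stripped of the retry loop, the wheezy and jessie scripts above each make two checks: the headless service's per-pod hostname resolves, and the pod's dashed A record resolves over both UDP and TCP. They can be run by hand from any pod with dig installed (dns-8051 is just this run's namespace; the doubled $$ in the log is shell escaping inside the test's template):

# Hostname record published via the headless service:
getent hosts dns-querier-2.dns-test-service-2.dns-8051.svc.cluster.local
# Dashed pod A record, queried over UDP and then TCP:
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8051.pod.cluster.local"}')
dig +notcp +noall +answer +search "$podARec" A
dig +tcp +noall +answer +search "$podARec" A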
• [SLOW TEST:6.279 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":168,"skipped":2780,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:05:24.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:05:25.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a" in namespace "projected-130" to be "success or failure" May 14 22:05:25.642: INFO: Pod "downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.46605ms May 14 22:05:27.784: INFO: Pod "downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145091327s May 14 22:05:29.789: INFO: Pod "downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a": Phase="Running", Reason="", readiness=true. Elapsed: 4.150053274s May 14 22:05:31.794: INFO: Pod "downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15504139s STEP: Saw pod success May 14 22:05:31.794: INFO: Pod "downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a" satisfied condition "success or failure" May 14 22:05:31.797: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a container client-container: STEP: delete the pod May 14 22:05:31.842: INFO: Waiting for pod downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a to disappear May 14 22:05:31.968: INFO: Pod downwardapi-volume-91eb312b-289b-4861-921d-62b241d9aa2a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:05:31.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-130" for this suite. 
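The volume this test consumes maps a container resource field to a file through the downward API. A minimal sketch of the relevant stanza (names and the 32Mi request are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # The file's contents should equal the container's memory request in bytes.
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF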
• [SLOW TEST:7.137 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2796,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:05:32.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:05:32.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 14 22:05:32.732: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T22:05:32Z generation:1 name:name1 resourceVersion:16218562 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:388fa870-e87c-4c80-8f14-bc247db98030] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 14 22:05:42.741: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T22:05:42Z generation:1 name:name2 resourceVersion:16218598 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f45f78df-5023-478b-9f1d-c56c3794fcdb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 14 22:05:52.759: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T22:05:32Z generation:2 name:name1 resourceVersion:16218632 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:388fa870-e87c-4c80-8f14-bc247db98030] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 14 22:06:02.765: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T22:05:42Z generation:2 name:name2 resourceVersion:16218669 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f45f78df-5023-478b-9f1d-c56c3794fcdb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 14 22:06:12.774: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T22:05:32Z generation:2 name:name1 resourceVersion:16218700 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:388fa870-e87c-4c80-8f14-bc247db98030] num:map[num1:9223372036854775807 num2:1000000]]} 
STEP: Deleting second CR May 14 22:06:22.782: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-14T22:05:42Z generation:2 name:name2 resourceVersion:16218730 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f45f78df-5023-478b-9f1d-c56c3794fcdb] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:06:33.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8719" for this suite. • [SLOW TEST:61.276 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":170,"skipped":2800,"failed":0} SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:06:33.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:06:33.429: INFO: Creating deployment "test-recreate-deployment" May 14 22:06:33.447: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 14 22:06:33.543: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 14 22:06:36.138: INFO: Waiting deployment "test-recreate-deployment" to complete May 14 22:06:36.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090793, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725090793, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:06:38.188: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 14 22:06:38.221: INFO: Updating deployment test-recreate-deployment May 14 22:06:38.221: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 14 22:06:39.011: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3780 /apis/apps/v1/namespaces/deployment-3780/deployments/test-recreate-deployment f7350421-5053-4b56-aa88-41bf4eb5104a 16218826 2 2020-05-14 22:06:33 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00562f7b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-14 22:06:38 +0000 UTC,LastTransitionTime:2020-05-14 22:06:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-14 22:06:38 +0000 UTC,LastTransitionTime:2020-05-14 22:06:33 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 14 22:06:39.014: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3780 /apis/apps/v1/namespaces/deployment-3780/replicasets/test-recreate-deployment-5f94c574ff 01cda603-804e-4178-b4f7-a0b5a65111b5 16218823 1 2020-05-14 22:06:38 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f7350421-5053-4b56-aa88-41bf4eb5104a 0xc00562fb47 0xc00562fb48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC
map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00562fba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 22:06:39.014: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 14 22:06:39.014: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3780 /apis/apps/v1/namespaces/deployment-3780/replicasets/test-recreate-deployment-799c574856 8199c358-0936-464a-8105-e7ada0df91e8 16218813 2 2020-05-14 22:06:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f7350421-5053-4b56-aa88-41bf4eb5104a 0xc00562fc17 0xc00562fc18}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00562fc88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 22:06:39.018: INFO: Pod "test-recreate-deployment-5f94c574ff-769xm" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-769xm test-recreate-deployment-5f94c574ff- deployment-3780 /api/v1/namespaces/deployment-3780/pods/test-recreate-deployment-5f94c574ff-769xm f84787d5-3ec2-406b-b9fd-b0006471f68c 16218827 0 2020-05-14 22:06:38 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 01cda603-804e-4178-b4f7-a0b5a65111b5 0xc005577bf7 0xc005577bf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4hfpf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4hfpf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4hfpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:06:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:06:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:06:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-14 22:06:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:06:39.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3780" for this suite. • [SLOW TEST:5.710 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":171,"skipped":2802,"failed":0} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:06:39.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:06:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2841" for this suite. 
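The 'image defaults' check above verifies that a container spec with neither command nor args falls through to the image's own ENTRYPOINT and CMD. A minimal illustration (the pod name is a placeholder; any image with a sensible default entrypoint works):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: main
    image: docker.io/library/httpd:2.4.38-alpine
    # No command: and no args: here, so what runs is decided
    # entirely by the image's ENTRYPOINT/CMD.
EOF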
• [SLOW TEST:6.313 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:06:45.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 22:06:45.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9024' May 14 22:06:45.665: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 22:06:45.665: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 14 22:06:45.728: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-5pm22] May 14 22:06:45.728: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-5pm22" in namespace "kubectl-9024" to be "running and ready" May 14 22:06:45.734: INFO: Pod "e2e-test-httpd-rc-5pm22": Phase="Pending", Reason="", readiness=false. Elapsed: 5.282074ms May 14 22:06:47.738: INFO: Pod "e2e-test-httpd-rc-5pm22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009367795s May 14 22:06:49.742: INFO: Pod "e2e-test-httpd-rc-5pm22": Phase="Running", Reason="", readiness=true. Elapsed: 4.013681537s May 14 22:06:49.742: INFO: Pod "e2e-test-httpd-rc-5pm22" satisfied condition "running and ready" May 14 22:06:49.742: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-5pm22] May 14 22:06:49.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9024' May 14 22:06:49.886: INFO: stderr: "" May 14 22:06:49.886: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.240. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.240. Set the 'ServerName' directive globally to suppress this message\n[Thu May 14 22:06:48.195827 2020] [mpm_event:notice] [pid 1:tid 140163609553768] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu May 14 22:06:48.195881 2020] [core:notice] [pid 1:tid 140163609553768] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 14 22:06:49.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9024' May 14 22:06:49.984: INFO: stderr: "" May 14 22:06:49.984: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:06:49.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9024" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":173,"skipped":2846,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:06:49.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea May 14 22:06:50.189: INFO: Pod name my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea: Found 0 pods out of 1 May 14 22:06:55.211: INFO: Pod name my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea: Found 1 pods out of 1 May 14 22:06:55.211: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea" are running May 14 22:06:55.214: INFO: Pod "my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea-kg4z8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:06:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:06:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:06:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:06:50 +0000 UTC Reason: Message:}]) May 14 22:06:55.214: INFO: Trying to dial the pod May 14 22:07:00.225: INFO: Controller my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea: Got expected result from replica 1 [my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea-kg4z8]: 
"my-hostname-basic-2de52fae-f6cb-49df-9421-a4dbe026baea-kg4z8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:07:00.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2505" for this suite. • [SLOW TEST:10.244 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":174,"skipped":2855,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:07:00.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:07:16.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8474" for this suite. • [SLOW TEST:16.225 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":175,"skipped":2857,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:07:16.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:07:47.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4433" for this suite. STEP: Destroying namespace "nsdeletetest-4058" for this suite. May 14 22:07:47.781: INFO: Namespace nsdeletetest-4058 was already deleted STEP: Destroying namespace "nsdeletetest-4265" for this suite. • [SLOW TEST:31.322 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":176,"skipped":2877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:07:47.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:07:47.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-534" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":177,"skipped":2984,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:07:47.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:07:52.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5910" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2991,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:07:52.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cc0b4317-32eb-4349-ad02-24bc63c866d2 STEP: Creating a pod to test consume configMaps May 14 22:07:52.168: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9" in namespace "projected-8976" to be "success or failure" May 14 22:07:52.195: INFO: Pod "pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.951724ms May 14 22:07:54.245: INFO: Pod "pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077848895s May 14 22:07:56.264: INFO: Pod "pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.095943971s May 14 22:07:58.300: INFO: Pod "pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.132396403s STEP: Saw pod success May 14 22:07:58.300: INFO: Pod "pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9" satisfied condition "success or failure" May 14 22:07:58.303: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9 container projected-configmap-volume-test: STEP: delete the pod May 14 22:07:58.366: INFO: Waiting for pod pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9 to disappear May 14 22:07:58.479: INFO: Pod pod-projected-configmaps-da0cad33-b1fb-4a64-8a6d-bc1b871d3bb9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:07:58.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8976" for this suite. • [SLOW TEST:6.435 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2997,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:07:58.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 22:08:02.713: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:08:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2550" for this suite. 
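The assertion above ('Expected: &{DONE} to match ...') reads the message back from container status after a non-root container writes it to a non-default terminationMessagePath. A sketch of the shape involved (name, path, and UID are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "printf DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
    securityContext:
      runAsUser: 1000   # non-root, as the [LinuxOnly] variant requires
EOF
# Once the container terminates, the kubelet surfaces the file's contents here:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'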
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3014,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:08:02.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-3f06961d-76ee-4e6a-a5d8-b00066dd1551 STEP: Creating secret with name secret-projected-all-test-volume-79d891ad-f064-416a-847a-4583215b36dd STEP: Creating a pod to test Check all projections for projected volume plugin May 14 22:08:03.036: INFO: Waiting up to 5m0s for pod "projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a" in namespace "projected-570" to be "success or failure" May 14 22:08:03.042: INFO: Pod "projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24857ms May 14 22:08:05.046: INFO: Pod "projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00970295s May 14 22:08:07.050: INFO: Pod "projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013841294s STEP: Saw pod success May 14 22:08:07.050: INFO: Pod "projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a" satisfied condition "success or failure" May 14 22:08:07.053: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a container projected-all-volume-test: STEP: delete the pod May 14 22:08:07.146: INFO: Waiting for pod projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a to disappear May 14 22:08:07.202: INFO: Pod projected-volume-c6f28e65-1432-41ed-a52a-4bd16741f66a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:08:07.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-570" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3017,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:08:07.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-fc55cda2-1007-4913-ae45-416a836988f1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:08:07.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7331" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":182,"skipped":3038,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:08:07.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:08:07.770: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 14 22:08:12.774: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 22:08:12.774: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 14 22:08:12.840: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5657 /apis/apps/v1/namespaces/deployment-5657/deployments/test-cleanup-deployment 90876708-6e63-4752-bd37-7ad31dee0e8b 16219417 1 2020-05-14 22:08:12 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00562f958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 14 22:08:12.844: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-5657 /apis/apps/v1/namespaces/deployment-5657/replicasets/test-cleanup-deployment-55ffc6b7b6 52194802-e863-44d6-8a38-6e0f0173ecf7 16219419 1 2020-05-14 22:08:12 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 90876708-6e63-4752-bd37-7ad31dee0e8b 0xc00562fd67 0xc00562fd68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00562fdd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 22:08:12.844: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 14 22:08:12.844: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5657 /apis/apps/v1/namespaces/deployment-5657/replicasets/test-cleanup-controller 9a85fdfc-9e37-44dd-9c12-e8f8935c9e71 16219418 1 2020-05-14 22:08:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 90876708-6e63-4752-bd37-7ad31dee0e8b 0xc00562fc7f 0xc00562fc90}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} 
[] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00562fcf8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 14 22:08:12.897: INFO: Pod "test-cleanup-controller-tp8tm" is available: &Pod{ObjectMeta:{test-cleanup-controller-tp8tm test-cleanup-controller- deployment-5657 /api/v1/namespaces/deployment-5657/pods/test-cleanup-controller-tp8tm 137a099d-584f-4f3b-95be-c3eb5fbcab37 16219395 0 2020-05-14 22:08:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 9a85fdfc-9e37-44dd-9c12-e8f8935c9e71 0xc00522c6a7 0xc00522c6a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hnjk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hnjk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hnjk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initia
lized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.244,StartTime:2020-05-14 22:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 22:08:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://028fe8edd27dc2b3bd336f69e53b3273f855c596efc1e4a317a73939478c3933,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 22:08:12.897: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-cchqr" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-cchqr test-cleanup-deployment-55ffc6b7b6- deployment-5657 /api/v1/namespaces/deployment-5657/pods/test-cleanup-deployment-55ffc6b7b6-cchqr 0e1c5bb7-1812-481c-b9aa-7f0bc73b8534 16219425 0 2020-05-14 22:08:12 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 52194802-e863-44d6-8a38-6e0f0173ecf7 0xc00522c837 0xc00522c838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hnjk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hnjk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hnjk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:08:12.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5657" for this suite. 
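The assertion above hinges on spec.revisionHistoryLimit, visible as RevisionHistoryLimit:*0 in the deployment dump: with a limit of 0, a superseded ReplicaSet is garbage-collected as soon as the rollout that replaced it completes. A rough way to watch the same behaviour by hand; the deployment name and the second image tag are illustrative, not from this run:
kubectl create deployment demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl patch deployment demo --type=merge -p '{"spec":{"revisionHistoryLimit":0}}'
# Trigger a rollout; the old ReplicaSet is deleted once the new one is ready:
kubectl set image deployment/demo httpd=docker.io/library/httpd:2.4.39-alpine
kubectl get rs -l app=demo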
• [SLOW TEST:5.274 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":183,"skipped":3060,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:08:12.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-709 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-709 May 14 22:08:13.072: INFO: Found 0 stateful pods, waiting for 1 May 14 22:08:23.077: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 14 22:08:23.101: INFO: Deleting all statefulset in ns statefulset-709 May 14 22:08:23.107: INFO: Scaling statefulset ss to 0 May 14 22:08:43.236: INFO: Waiting for statefulset status.replicas updated to 0 May 14 22:08:43.238: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:08:43.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-709" for this suite. 
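The scale subresource being read and updated above is an ordinary REST endpoint on the StatefulSet, and it is the same endpoint kubectl scale drives. Illustrative only, since the namespace is destroyed at the end of the spec; with this run's names it would look like:
kubectl get --raw /apis/apps/v1/namespaces/statefulset-709/statefulsets/ss/scale
kubectl scale statefulset ss --namespace statefulset-709 --replicas=2
kubectl get statefulset ss --namespace statefulset-709 -o jsonpath='{.spec.replicas}'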
• [SLOW TEST:30.351 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":184,"skipped":3063,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:08:43.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:09:05.398: INFO: Container started at 2020-05-14 22:08:45 +0000 UTC, pod became ready at 2020-05-14 22:09:04 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:09:05.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7806" for this suite. 
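The probe spec above checks two properties: the pod must not report Ready before initialDelaySeconds has elapsed (note the ~19 s gap between "Container started" and "pod became ready" in the log), and a readiness probe, unlike a liveness probe, never restarts the container. A minimal sketch with an exec probe; the image, pod name, and timings are illustrative rather than the suite's exact pod:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo            # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]   # would succeed almost immediately...
      initialDelaySeconds: 30            # ...but no probe fires before 30s
      periodSeconds: 5
EOF
# The pod stays Running but NotReady until the first probe after the delay,
# and its restart count never moves:
kubectl get pod readiness-demo -o wide --watch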
• [SLOW TEST:22.138 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3071,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:09:05.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8845 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8845 STEP: Creating statefulset with conflicting port in namespace statefulset-8845 STEP: Waiting until pod test-pod will start running in namespace statefulset-8845 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8845 May 14 22:09:11.583: INFO: Observed stateful pod in namespace: statefulset-8845, name: ss-0, uid: 67835050-22db-4a93-9ab4-07e5ed2d53cc, status phase: Pending. Waiting for statefulset controller to delete. May 14 22:09:12.702: INFO: Observed stateful pod in namespace: statefulset-8845, name: ss-0, uid: 67835050-22db-4a93-9ab4-07e5ed2d53cc, status phase: Failed. Waiting for statefulset controller to delete. May 14 22:09:12.771: INFO: Observed stateful pod in namespace: statefulset-8845, name: ss-0, uid: 67835050-22db-4a93-9ab4-07e5ed2d53cc, status phase: Failed. Waiting for statefulset controller to delete. 
May 14 22:09:12.851: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8845 STEP: Removing pod with conflicting port in namespace statefulset-8845 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8845 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 14 22:09:19.194: INFO: Deleting all statefulset in ns statefulset-8845 May 14 22:09:19.197: INFO: Scaling statefulset ss to 0 May 14 22:09:39.223: INFO: Waiting for statefulset status.replicas updated to 0 May 14 22:09:39.225: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:09:39.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8845" for this suite. • [SLOW TEST:33.836 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":186,"skipped":3083,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:09:39.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:09:39.871: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:09:41.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:09:43.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725090979, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:09:46.979: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:09:46.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7343-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:09:48.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4873" for this suite. STEP: Destroying namespace "webhook-4873-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.103 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":187,"skipped":3093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:09:48.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:10:04.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9415" for this suite. • [SLOW TEST:16.308 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":188,"skipped":3167,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:10:04.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-652.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-652.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-652.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-652.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-652.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-652.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-652.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.112.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.112.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.112.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.112.90_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-652.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-652.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-652.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-652.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-652.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-652.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-652.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-652.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.112.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.112.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.112.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.112.90_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 22:10:12.902: INFO: Unable to read wheezy_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.905: INFO: Unable to read wheezy_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.907: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.909: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.923: INFO: Unable to read jessie_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.924: INFO: Unable to read jessie_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.926: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:12.941: INFO: Lookups using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 failed for: [wheezy_udp@dns-test-service.dns-652.svc.cluster.local wheezy_tcp@dns-test-service.dns-652.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_udp@dns-test-service.dns-652.svc.cluster.local jessie_tcp@dns-test-service.dns-652.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local] May 14 22:10:17.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 
22:10:17.951: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.954: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.974: INFO: Unable to read jessie_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.976: INFO: Unable to read jessie_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.979: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.981: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:17.996: INFO: Lookups using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 failed for: [wheezy_udp@dns-test-service.dns-652.svc.cluster.local wheezy_tcp@dns-test-service.dns-652.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_udp@dns-test-service.dns-652.svc.cluster.local jessie_tcp@dns-test-service.dns-652.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local] May 14 22:10:22.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:22.990: INFO: Unable to read wheezy_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:22.993: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:22.996: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:23.124: INFO: Unable to read jessie_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods 
dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:23.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:23.130: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:23.132: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:23.148: INFO: Lookups using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 failed for: [wheezy_udp@dns-test-service.dns-652.svc.cluster.local wheezy_tcp@dns-test-service.dns-652.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_udp@dns-test-service.dns-652.svc.cluster.local jessie_tcp@dns-test-service.dns-652.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local] May 14 22:10:27.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.951: INFO: Unable to read wheezy_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.955: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.958: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.977: INFO: Unable to read jessie_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.979: INFO: Unable to read jessie_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.982: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.984: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the 
requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:27.997: INFO: Lookups using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 failed for: [wheezy_udp@dns-test-service.dns-652.svc.cluster.local wheezy_tcp@dns-test-service.dns-652.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_udp@dns-test-service.dns-652.svc.cluster.local jessie_tcp@dns-test-service.dns-652.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local] May 14 22:10:32.947: INFO: Unable to read wheezy_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.951: INFO: Unable to read wheezy_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.954: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.958: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.982: INFO: Unable to read jessie_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.984: INFO: Unable to read jessie_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.987: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:32.989: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:33.003: INFO: Lookups using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 failed for: [wheezy_udp@dns-test-service.dns-652.svc.cluster.local wheezy_tcp@dns-test-service.dns-652.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_udp@dns-test-service.dns-652.svc.cluster.local jessie_tcp@dns-test-service.dns-652.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local] May 14 22:10:37.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-652.svc.cluster.local from pod 
dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.967: INFO: Unable to read jessie_udp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.971: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.973: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local from pod dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443: the server could not find the requested resource (get pods dns-test-91d8bd5f-468f-4732-b617-beae739ce443) May 14 22:10:37.985: INFO: Lookups using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 failed for: [wheezy_udp@dns-test-service.dns-652.svc.cluster.local wheezy_tcp@dns-test-service.dns-652.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_udp@dns-test-service.dns-652.svc.cluster.local jessie_tcp@dns-test-service.dns-652.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-652.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-652.svc.cluster.local] May 14 22:10:43.004: INFO: DNS probes using dns-652/dns-test-91d8bd5f-468f-4732-b617-beae739ce443 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:10:43.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-652" for this suite. 
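The wheezy and jessie probe loops above reduce to a few lookups that any in-cluster pod with dig available can repeat: A records for the test service name, SRV records for its named _http._tcp port, and a PTR record for the reverse lookup of the service's ClusterIP (10.106.112.90 in this run). With this run's namespace, roughly:
dig +search dns-test-service.dns-652.svc.cluster.local A
dig +search _http._tcp.dns-test-service.dns-652.svc.cluster.local SRV
dig 90.112.106.10.in-addr.arpa. PTR
# +search applies the pod's resolv.conf search path, so from a pod inside the
# dns-652 namespace the short name dns-test-service resolves as well.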
• [SLOW TEST:39.148 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":189,"skipped":3171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:10:43.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:10:45.028: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:10:47.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:10:50.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:02.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9576" for this suite. STEP: Destroying namespace "webhook-9576-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.641 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":190,"skipped":3195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:02.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 14 22:11:02.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5212 -- logs-generator --log-lines-total 100 --run-duration 20s' May 14 22:11:06.920: INFO: stderr: "" May 14 22:11:06.920: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 14 22:11:06.920: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 14 22:11:06.920: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5212" to be "running and ready, or succeeded" May 14 22:11:06.955: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 35.56452ms May 14 22:11:08.960: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039840671s May 14 22:11:10.963: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.042852243s May 14 22:11:10.963: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 14 22:11:10.963: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 14 22:11:10.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5212' May 14 22:11:11.083: INFO: stderr: "" May 14 22:11:11.083: INFO: stdout: "I0514 22:11:09.526644 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/gdfz 563\nI0514 22:11:09.726756 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/mmx 404\nI0514 22:11:09.926807 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/kqz 574\nI0514 22:11:10.126798 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/qfv 269\nI0514 22:11:10.326776 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/xh84 262\nI0514 22:11:10.526784 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/dlwc 456\nI0514 22:11:10.726803 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/bx2m 445\nI0514 22:11:10.926798 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/8l4r 261\n" STEP: limiting log lines May 14 22:11:11.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5212 --tail=1' May 14 22:11:11.194: INFO: stderr: "" May 14 22:11:11.194: INFO: stdout: "I0514 22:11:11.126784 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/h6d 371\n" May 14 22:11:11.194: INFO: got output "I0514 22:11:11.126784 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/h6d 371\n" STEP: limiting log bytes May 14 22:11:11.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5212 --limit-bytes=1' May 14 22:11:11.303: INFO: stderr: "" May 14 22:11:11.303: INFO: stdout: "I" May 14 22:11:11.303: INFO: got output "I" STEP: exposing timestamps May 14 22:11:11.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5212 --tail=1 --timestamps' May 14 22:11:11.399: INFO: stderr: "" May 14 22:11:11.399: INFO: stdout: "2020-05-14T22:11:11.32690427Z I0514 22:11:11.326760 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/g2c 502\n" May 14 22:11:11.399: INFO: got output "2020-05-14T22:11:11.32690427Z I0514 22:11:11.326760 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/g2c 502\n" STEP: restricting to a time range May 14 22:11:13.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5212 --since=1s' May 14 22:11:14.010: INFO: stderr: "" May 14 22:11:14.010: INFO: stdout: "I0514 22:11:13.126765 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/h7h5 560\nI0514 22:11:13.326782 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/tk7n 544\nI0514 22:11:13.526765 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/hclw 317\nI0514 22:11:13.726765 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/7xt 315\nI0514 22:11:13.926799 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/22vc 313\n" May 14 22:11:14.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5212 --since=24h' May 14 22:11:14.108: INFO: stderr: "" May 14 22:11:14.108: INFO: stdout: "I0514 22:11:09.526644 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/gdfz 563\nI0514 22:11:09.726756 1 logs_generator.go:76] 1 GET 
/api/v1/namespaces/default/pods/mmx 404\nI0514 22:11:09.926807 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/kqz 574\nI0514 22:11:10.126798 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/qfv 269\nI0514 22:11:10.326776 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/xh84 262\nI0514 22:11:10.526784 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/dlwc 456\nI0514 22:11:10.726803 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/bx2m 445\nI0514 22:11:10.926798 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/8l4r 261\nI0514 22:11:11.126784 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/h6d 371\nI0514 22:11:11.326760 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/g2c 502\nI0514 22:11:11.526782 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/2bl4 211\nI0514 22:11:11.726801 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/8kd 247\nI0514 22:11:11.926793 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/bhg 306\nI0514 22:11:12.126744 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/mgm 401\nI0514 22:11:12.326799 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/bd4 386\nI0514 22:11:12.526788 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/nd24 240\nI0514 22:11:12.726779 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/pwn 441\nI0514 22:11:12.926806 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/fvg 205\nI0514 22:11:13.126765 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/h7h5 560\nI0514 22:11:13.326782 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/tk7n 544\nI0514 22:11:13.526765 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/hclw 317\nI0514 22:11:13.726765 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/7xt 315\nI0514 22:11:13.926799 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/22vc 313\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 14 22:11:14.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5212' May 14 22:11:19.525: INFO: stderr: "" May 14 22:11:19.525: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:19.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5212" for this suite. 
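[Annotation] The four STEP blocks above exercise the main log-filtering flags one at a time: --tail bounds the number of lines, --limit-bytes truncates mid-line (hence the lone "I"), --timestamps prefixes each line with an RFC3339 timestamp, and --since restricts output to a time window. Standalone, with the pod and container names from this run:

    $ kubectl logs logs-generator -c logs-generator -n kubectl-5212 --tail=1         # last line only
    $ kubectl logs logs-generator -c logs-generator -n kubectl-5212 --limit-bytes=1  # first byte only
    $ kubectl logs logs-generator -c logs-generator -n kubectl-5212 --timestamps     # RFC3339 prefix per line
    $ kubectl logs logs-generator -c logs-generator -n kubectl-5212 --since=1s       # last second of output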
• [SLOW TEST:17.094 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":191,"skipped":3222,"failed":0} [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:19.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:11:19.603: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-42e8d663-526c-40ea-8639-f6bd06535d66" in namespace "security-context-test-2317" to be "success or failure" May 14 22:11:19.638: INFO: Pod "busybox-readonly-false-42e8d663-526c-40ea-8639-f6bd06535d66": Phase="Pending", Reason="", readiness=false. Elapsed: 34.8823ms May 14 22:11:21.654: INFO: Pod "busybox-readonly-false-42e8d663-526c-40ea-8639-f6bd06535d66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050839063s May 14 22:11:23.659: INFO: Pod "busybox-readonly-false-42e8d663-526c-40ea-8639-f6bd06535d66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055861256s May 14 22:11:23.659: INFO: Pod "busybox-readonly-false-42e8d663-526c-40ea-8639-f6bd06535d66" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:23.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2317" for this suite. 
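[Annotation] The pod above only has to prove that its root filesystem is writable when readOnlyRootFilesystem is false, so "success or failure" resolves to the pod running a write and exiting 0 (Phase="Succeeded"). A minimal sketch of that shape, with illustrative names and command rather than the test's exact spec:

    $ cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false
    spec:
      restartPolicy: Never
      containers:
      - name: writer
        image: busybox
        # Writing anywhere on the rootfs must succeed when it is not read-only.
        command: ["sh", "-c", "echo ok > /probe && cat /probe"]
        securityContext:
          readOnlyRootFilesystem: false
    EOF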
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:23.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bc1c999c-3fa8-48a2-be74-08b3d0a2af20 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bc1c999c-3fa8-48a2-be74-08b3d0a2af20 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:30.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1133" for this suite. • [SLOW TEST:6.365 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:30.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:11:30.185: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 14 22:11:32.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 create -f -' May 14 22:11:35.306: INFO: stderr: "" May 14 22:11:35.306: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 14 
22:11:35.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 delete e2e-test-crd-publish-openapi-8687-crds test-foo' May 14 22:11:35.590: INFO: stderr: "" May 14 22:11:35.590: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 14 22:11:35.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 apply -f -' May 14 22:11:36.276: INFO: stderr: "" May 14 22:11:36.276: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 14 22:11:36.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 delete e2e-test-crd-publish-openapi-8687-crds test-foo' May 14 22:11:36.483: INFO: stderr: "" May 14 22:11:36.483: INFO: stdout: "e2e-test-crd-publish-openapi-8687-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 14 22:11:36.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 create -f -' May 14 22:11:37.039: INFO: rc: 1 May 14 22:11:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 apply -f -' May 14 22:11:37.285: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 14 22:11:37.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 create -f -' May 14 22:11:37.545: INFO: rc: 1 May 14 22:11:37.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2567 apply -f -' May 14 22:11:37.792: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 14 22:11:37.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8687-crds' May 14 22:11:38.032: INFO: stderr: "" May 14 22:11:38.032: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8687-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 14 22:11:38.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8687-crds.metadata' May 14 22:11:38.375: INFO: stderr: "" May 14 22:11:38.375: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8687-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 14 22:11:38.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8687-crds.spec' May 14 22:11:38.648: INFO: stderr: "" May 14 22:11:38.648: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8687-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 14 22:11:38.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8687-crds.spec.bars' May 14 22:11:38.896: INFO: stderr: "" May 14 22:11:38.896: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8687-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 14 22:11:38.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8687-crds.spec.bars2' May 14 22:11:39.227: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:42.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2567" for this suite. 
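[Annotation] Each "rc: 1" above is an expected failure: once the CRD's structural schema is published through OpenAPI, kubectl validates custom resources client-side, so create/apply requests with unknown or missing required properties are rejected before they reach the server, and kubectl explain can document every field (including nested ones like spec.bars). Using the CRD from this run, a valid CR looks roughly like the following; the payload is illustrative, since the test pipes its manifests from memory:

    $ kubectl explain e2e-test-crd-publish-openapi-8687-crds.spec.bars
    $ cat <<EOF | kubectl create -n crd-publish-openapi-2567 -f -
    apiVersion: crd-publish-openapi-test-foo.example.com/v1
    kind: E2e-test-crd-publish-openapi-8687-crd
    metadata:
      name: test-foo
    spec:
      bars:
      - name: example-bar   # "name" is the schema's required property
    EOF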
• [SLOW TEST:12.061 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":194,"skipped":3304,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:42.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-078cfa81-c59f-4331-8749-a8ffbe5c90cc STEP: Creating a pod to test consume secrets May 14 22:11:42.372: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e" in namespace "projected-8236" to be "success or failure" May 14 22:11:42.387: INFO: Pod "pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.553779ms May 14 22:11:44.460: INFO: Pod "pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088114973s May 14 22:11:46.465: INFO: Pod "pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09283501s STEP: Saw pod success May 14 22:11:46.465: INFO: Pod "pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e" satisfied condition "success or failure" May 14 22:11:46.468: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e container projected-secret-volume-test: STEP: delete the pod May 14 22:11:46.558: INFO: Waiting for pod pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e to disappear May 14 22:11:46.594: INFO: Pod pod-projected-secrets-1adc94f3-6586-49c2-aea0-4662e2ab3c2e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:46.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8236" for this suite. 
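[Annotation] The defaultMode under test is the file mode applied to keys materialized by the projected volume; the test asserts that the mounted file carries it. A minimal sketch of that wiring, assuming a secret named projected-secret-test already exists in the namespace (all names illustrative):

    $ cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secret
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          defaultMode: 0400      # mode asserted on the projected files
          sources:
          - secret:
              name: projected-secret-test
    EOF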
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3314,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:46.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-476de618-aa37-4c13-b1e6-3cda7ca3f138 STEP: Creating a pod to test consume secrets May 14 22:11:46.711: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c" in namespace "projected-5399" to be "success or failure" May 14 22:11:46.757: INFO: Pod "pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.249061ms May 14 22:11:48.761: INFO: Pod "pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050443595s May 14 22:11:50.848: INFO: Pod "pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136865614s STEP: Saw pod success May 14 22:11:50.848: INFO: Pod "pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c" satisfied condition "success or failure" May 14 22:11:50.945: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c container secret-volume-test: STEP: delete the pod May 14 22:11:50.973: INFO: Waiting for pod pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c to disappear May 14 22:11:50.984: INFO: Pod pod-projected-secrets-3cd66b54-5bfb-497b-8887-7e6fd4bec83c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:50.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5399" for this suite. 
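[Annotation] Here the same secret is consumed twice, as two projected volumes mounted at two paths, and both mounts must serve the content. The relevant shape, compressed into a minimal pod (names illustrative; the secret is assumed to exist):

    $ cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-twice
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"]
        volumeMounts:
        - { name: secret-volume-1, mountPath: /etc/secret-volume-1, readOnly: true }
        - { name: secret-volume-2, mountPath: /etc/secret-volume-2, readOnly: true }
      volumes:
      - name: secret-volume-1
        projected:
          sources: [ { secret: { name: projected-secret-test } } ]
      - name: secret-volume-2
        projected:
          sources: [ { secret: { name: projected-secret-test } } ]
    EOF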
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3335,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:50.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:57.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4001" for this suite. STEP: Destroying namespace "nsdeletetest-8806" for this suite. May 14 22:11:57.459: INFO: Namespace nsdeletetest-8806 was already deleted STEP: Destroying namespace "nsdeletetest-5069" for this suite. 
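[Annotation] The assertion here is that deleting a namespace garbage-collects the services inside it, so a recreated namespace of the same name comes back empty. Driven by hand (names illustrative; kubectl delete waits for the namespace's finalizers by default):

    $ kubectl create namespace nsdeletetest
    $ kubectl create service clusterip test-service --tcp=80:80 -n nsdeletetest
    $ kubectl delete namespace nsdeletetest     # blocks until contents are finalized
    $ kubectl create namespace nsdeletetest     # same name, fresh namespace
    $ kubectl get services -n nsdeletetest      # "No resources found": the service is gone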
• [SLOW TEST:6.472 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":197,"skipped":3335,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:57.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 14 22:11:57.540: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix748641355/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:11:57.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7628" for this suite. 
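[Annotation] --unix-socket makes kubectl proxy listen on a filesystem socket instead of a TCP port; the test then fetches /api/ through it. The equivalent by hand, with an illustrative socket path:

    $ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    $ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
    # Expect the APIVersions document, e.g. {"kind":"APIVersions",...}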
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":198,"skipped":3337,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:11:57.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 14 22:12:01.762: INFO: &Pod{ObjectMeta:{send-events-4e2affd8-3f7f-43d7-9610-48825eacbc54 events-2813 /api/v1/namespaces/events-2813/pods/send-events-4e2affd8-3f7f-43d7-9610-48825eacbc54 2344b532-605e-444d-b210-54a6fe83dc67 16220832 0 2020-05-14 22:11:57 +0000 UTC map[name:foo time:725013817] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w5x7j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w5x7j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w5x7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolerat
ion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:11:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:12:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:12:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:11:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.2,StartTime:2020-05-14 22:11:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 22:12:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6d7fe76eedb8d5187cd981ebaf326fc08f4454115a2f35b3b73fa43bb0d5e1ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 14 22:12:03.767: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 14 22:12:05.770: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:12:05.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2813" for this suite. 
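[Annotation] The giant &Pod{...} dump above is just the retrieved pod object logged in full; the assertions that matter are the two "Saw ... event" lines, one from the scheduler and one from the kubelet. The same events can be listed with a field selector on the involved object (pod name from this run):

    $ kubectl get events -n events-2813 \
        --field-selector involvedObject.name=send-events-4e2affd8-3f7f-43d7-9610-48825eacbc54
    # Typically: Scheduled (default-scheduler), then Pulled/Created/Started (kubelet).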
• [SLOW TEST:8.198 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":199,"skipped":3341,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:12:05.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 14 22:12:05.928: INFO: Waiting up to 5m0s for pod "downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b" in namespace "downward-api-6313" to be "success or failure" May 14 22:12:05.931: INFO: Pod "downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.215566ms May 14 22:12:07.991: INFO: Pod "downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063409891s May 14 22:12:09.995: INFO: Pod "downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067576461s STEP: Saw pod success May 14 22:12:09.995: INFO: Pod "downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b" satisfied condition "success or failure" May 14 22:12:09.998: INFO: Trying to get logs from node jerma-worker pod downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b container dapi-container: STEP: delete the pod May 14 22:12:10.069: INFO: Waiting for pod downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b to disappear May 14 22:12:10.111: INFO: Pod downward-api-88c84f9c-e529-4e41-bc16-995856de5a9b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:12:10.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6313" for this suite. 
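[Annotation] The downward API fields under test are injected as environment variables via fieldRef and printed by the container for verification. A minimal sketch of that spec (names illustrative):

    $ cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-env
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep ^POD_"]
        env:
        - name: POD_NAME
          valueFrom: { fieldRef: { fieldPath: metadata.name } }
        - name: POD_NAMESPACE
          valueFrom: { fieldRef: { fieldPath: metadata.namespace } }
        - name: POD_IP
          valueFrom: { fieldRef: { fieldPath: status.podIP } }
    EOF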
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3346,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:12:10.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 14 22:12:10.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 14 22:12:10.507: INFO: stderr: "" May 14 22:12:10.507: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:12:10.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4637" for this suite. 
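[Annotation] The escape sequences in the stdout above (\x1b[0;32m and friends) are kubectl's terminal color codes, not corruption; the check only requires that the master endpoint appears in the output. By hand:

    $ kubectl cluster-info          # master and KubeDNS endpoints
    $ kubectl cluster-info dump     # the verbose diagnostics the hint refers to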
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":201,"skipped":3348,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:12:10.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:12:11.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:12:13.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091131, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091131, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091131, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091131, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:12:16.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:12:16.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4336-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:12:17.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5467" for this suite. STEP: Destroying namespace "webhook-5467-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.602 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":202,"skipped":3351,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:12:18.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:12:18.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a" in namespace "downward-api-7649" to be "success or failure" May 14 22:12:18.761: INFO: Pod "downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 337.309958ms May 14 22:12:20.765: INFO: Pod "downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341433218s May 14 22:12:22.789: INFO: Pod "downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.365870716s STEP: Saw pod success May 14 22:12:22.789: INFO: Pod "downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a" satisfied condition "success or failure" May 14 22:12:22.792: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a container client-container: STEP: delete the pod May 14 22:12:22.814: INFO: Waiting for pod downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a to disappear May 14 22:12:22.818: INFO: Pod downwardapi-volume-bde525d7-a867-416b-a205-c46133cc4e6a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:12:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7649" for this suite. 
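[Annotation] Unlike the env-var variant earlier, this test surfaces the container's own CPU request through a downwardAPI volume via resourceFieldRef; the divisor controls the unit written to the file. A sketch with illustrative names, request, and divisor:

    $ cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-cpu
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m      # file contains "250"
    EOF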
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3360,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:12:22.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4500, will wait for the garbage collector to delete the pods May 14 22:12:27.040: INFO: Deleting Job.batch foo took: 6.416877ms May 14 22:12:27.141: INFO: Terminating Job.batch foo pods took: 100.285264ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:13:02.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4500" for this suite. • [SLOW TEST:39.327 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":204,"skipped":3378,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:13:02.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:13:02.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367' May 14 22:13:02.519: INFO: stderr: "" May 14 22:13:02.519: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 14 22:13:02.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367' May 14 22:13:02.859: INFO: stderr: "" May 14 22:13:02.859: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 14 22:13:03.864: INFO: Selector matched 1 pods for map[app:agnhost] May 14 22:13:03.864: INFO: Found 0 / 1 May 14 22:13:04.970: INFO: Selector matched 1 pods for map[app:agnhost] May 14 22:13:04.970: INFO: Found 0 / 1 May 14 22:13:05.862: INFO: Selector matched 1 pods for map[app:agnhost] May 14 22:13:05.863: INFO: Found 0 / 1 May 14 22:13:06.879: INFO: Selector matched 1 pods for map[app:agnhost] May 14 22:13:06.879: INFO: Found 1 / 1 May 14 22:13:06.879: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 22:13:06.882: INFO: Selector matched 1 pods for map[app:agnhost] May 14 22:13:06.882: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 22:13:06.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-8ntkh --namespace=kubectl-7367' May 14 22:13:06.978: INFO: stderr: "" May 14 22:13:06.978: INFO: stdout: "Name: agnhost-master-8ntkh\nNamespace: kubectl-7367\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Thu, 14 May 2020 22:13:02 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.162\nIPs:\n IP: 10.244.1.162\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://e49bfb08237ecbd50fef069d544347f45214667fee1633aaf78ca324994c4a2d\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 14 May 2020 22:13:05 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vjccb (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vjccb:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vjccb\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-7367/agnhost-master-8ntkh to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 14 22:13:06.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7367' May 14 22:13:07.077: INFO: stderr: "" May 14 22:13:07.077: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7367\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-8ntkh\n" May 14 22:13:07.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service 
agnhost-master --namespace=kubectl-7367' May 14 22:13:07.169: INFO: stderr: "" May 14 22:13:07.169: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7367\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.205.243\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.162:6379\nSession Affinity: None\nEvents: <none>\n" May 14 22:13:07.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 14 22:13:07.336: INFO: stderr: "" May 14 22:13:07.336: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Thu, 14 May 2020 22:13:03 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 14 May 2020 22:11:07 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 14 May 2020 22:11:07 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 14 May 2020 22:11:07 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 14 May 2020 22:11:07 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 60d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 60d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 60d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n 
local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" May 14 22:13:07.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7367' May 14 22:13:07.446: INFO: stderr: "" May 14 22:13:07.446: INFO: stdout: "Name: kubectl-7367\nLabels: e2e-framework=kubectl\n e2e-run=1ca4ff6d-d3f2-43f3-b99d-b6d492fdc766\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:13:07.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7367" for this suite. • [SLOW TEST:5.298 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":205,"skipped":3383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:13:07.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 14 22:13:07.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9215' May 14 22:13:07.778: INFO: stderr: "" May 14 22:13:07.778: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 14 22:13:07.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9215' May 14 22:13:07.981: INFO: stderr: "" May 14 22:13:07.981: INFO: stdout: "update-demo-nautilus-j5m46 update-demo-nautilus-m68hp " May 14 22:13:07.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:08.067: INFO: stderr: "" May 14 22:13:08.067: INFO: stdout: "" May 14 22:13:08.067: INFO: update-demo-nautilus-j5m46 is created but not running May 14 22:13:13.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9215' May 14 22:13:13.172: INFO: stderr: "" May 14 22:13:13.172: INFO: stdout: "update-demo-nautilus-j5m46 update-demo-nautilus-m68hp " May 14 22:13:13.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:13.267: INFO: stderr: "" May 14 22:13:13.267: INFO: stdout: "true" May 14 22:13:13.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:13.358: INFO: stderr: "" May 14 22:13:13.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 22:13:13.358: INFO: validating pod update-demo-nautilus-j5m46 May 14 22:13:13.362: INFO: got data: { "image": "nautilus.jpg" } May 14 22:13:13.362: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 22:13:13.362: INFO: update-demo-nautilus-j5m46 is verified up and running May 14 22:13:13.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m68hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:13.450: INFO: stderr: "" May 14 22:13:13.450: INFO: stdout: "true" May 14 22:13:13.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m68hp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:13.546: INFO: stderr: "" May 14 22:13:13.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 22:13:13.546: INFO: validating pod update-demo-nautilus-m68hp May 14 22:13:13.550: INFO: got data: { "image": "nautilus.jpg" } May 14 22:13:13.550: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 14 22:13:13.550: INFO: update-demo-nautilus-m68hp is verified up and running STEP: scaling down the replication controller May 14 22:13:13.552: INFO: scanned /root for discovery docs: May 14 22:13:13.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9215' May 14 22:13:14.687: INFO: stderr: "" May 14 22:13:14.687: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 22:13:14.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9215' May 14 22:13:14.794: INFO: stderr: "" May 14 22:13:14.794: INFO: stdout: "update-demo-nautilus-j5m46 update-demo-nautilus-m68hp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 14 22:13:19.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9215' May 14 22:13:19.930: INFO: stderr: "" May 14 22:13:19.930: INFO: stdout: "update-demo-nautilus-j5m46 " May 14 22:13:19.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:20.017: INFO: stderr: "" May 14 22:13:20.017: INFO: stdout: "true" May 14 22:13:20.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:20.104: INFO: stderr: "" May 14 22:13:20.104: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 22:13:20.104: INFO: validating pod update-demo-nautilus-j5m46 May 14 22:13:20.107: INFO: got data: { "image": "nautilus.jpg" } May 14 22:13:20.107: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 22:13:20.107: INFO: update-demo-nautilus-j5m46 is verified up and running STEP: scaling up the replication controller May 14 22:13:20.110: INFO: scanned /root for discovery docs: May 14 22:13:20.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9215' May 14 22:13:21.234: INFO: stderr: "" May 14 22:13:21.234: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 22:13:21.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9215' May 14 22:13:21.728: INFO: stderr: "" May 14 22:13:21.728: INFO: stdout: "update-demo-nautilus-j5m46 update-demo-nautilus-t5hqj " May 14 22:13:21.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:22.333: INFO: stderr: "" May 14 22:13:22.333: INFO: stdout: "true" May 14 22:13:22.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:22.422: INFO: stderr: "" May 14 22:13:22.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 22:13:22.422: INFO: validating pod update-demo-nautilus-j5m46 May 14 22:13:22.425: INFO: got data: { "image": "nautilus.jpg" } May 14 22:13:22.425: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 22:13:22.425: INFO: update-demo-nautilus-j5m46 is verified up and running May 14 22:13:22.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5hqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:22.540: INFO: stderr: "" May 14 22:13:22.540: INFO: stdout: "" May 14 22:13:22.540: INFO: update-demo-nautilus-t5hqj is created but not running May 14 22:13:27.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9215' May 14 22:13:27.644: INFO: stderr: "" May 14 22:13:27.644: INFO: stdout: "update-demo-nautilus-j5m46 update-demo-nautilus-t5hqj " May 14 22:13:27.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:27.737: INFO: stderr: "" May 14 22:13:27.737: INFO: stdout: "true" May 14 22:13:27.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5m46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:27.838: INFO: stderr: "" May 14 22:13:27.838: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 22:13:27.838: INFO: validating pod update-demo-nautilus-j5m46 May 14 22:13:27.842: INFO: got data: { "image": "nautilus.jpg" } May 14 22:13:27.842: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 22:13:27.842: INFO: update-demo-nautilus-j5m46 is verified up and running May 14 22:13:27.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5hqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:27.951: INFO: stderr: "" May 14 22:13:27.951: INFO: stdout: "true" May 14 22:13:27.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5hqj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9215' May 14 22:13:28.047: INFO: stderr: "" May 14 22:13:28.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 22:13:28.047: INFO: validating pod update-demo-nautilus-t5hqj May 14 22:13:28.051: INFO: got data: { "image": "nautilus.jpg" } May 14 22:13:28.051: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 22:13:28.051: INFO: update-demo-nautilus-t5hqj is verified up and running STEP: using delete to clean up resources May 14 22:13:28.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9215' May 14 22:13:28.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:13:28.161: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 14 22:13:28.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9215' May 14 22:13:28.256: INFO: stderr: "No resources found in kubectl-9215 namespace.\n" May 14 22:13:28.256: INFO: stdout: "" May 14 22:13:28.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9215 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 22:13:28.570: INFO: stderr: "" May 14 22:13:28.570: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:13:28.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9215" for this suite. 
• [SLOW TEST:21.379 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":206,"skipped":3411,"failed":0} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:13:28.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:13:28.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486" in namespace "downward-api-7932" to be "success or failure" May 14 22:13:28.936: INFO: Pod "downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087333ms May 14 22:13:31.587: INFO: Pod "downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661965086s May 14 22:13:33.591: INFO: Pod "downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486": Phase="Running", Reason="", readiness=true. Elapsed: 4.665650084s May 14 22:13:35.594: INFO: Pod "downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.668885708s STEP: Saw pod success May 14 22:13:35.594: INFO: Pod "downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486" satisfied condition "success or failure" May 14 22:13:35.597: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486 container client-container: STEP: delete the pod May 14 22:13:35.638: INFO: Waiting for pod downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486 to disappear May 14 22:13:35.654: INFO: Pod downwardapi-volume-190f8b2f-48cb-4e50-9238-fc3046a72486 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:13:35.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7932" for this suite. 
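This is the same downwardAPI-volume pattern as the cpu-request test earlier, with the item retargeted at limits.memory. A minimal sketch, names illustrative; with divisor 1Mi, a 64Mi limit is read back as "64":

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi
EOF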
• [SLOW TEST:6.871 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:13:35.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6edd7ed8-65e7-4c1a-9635-6d62fbba76af STEP: Creating a pod to test consume configMaps May 14 22:13:35.926: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820" in namespace "projected-1462" to be "success or failure" May 14 22:13:35.930: INFO: Pod "pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820": Phase="Pending", Reason="", readiness=false. Elapsed: 3.97232ms May 14 22:13:38.012: INFO: Pod "pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086258198s May 14 22:13:40.016: INFO: Pod "pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090606521s STEP: Saw pod success May 14 22:13:40.017: INFO: Pod "pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820" satisfied condition "success or failure" May 14 22:13:40.020: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820 container projected-configmap-volume-test: STEP: delete the pod May 14 22:13:40.058: INFO: Waiting for pod pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820 to disappear May 14 22:13:40.073: INFO: Pod pod-projected-configmaps-d8f604c9-8df3-4a3d-b195-1480e54b2820 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:13:40.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1462" for this suite. 
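The essentials of this test are a ConfigMap consumed through a projected volume by a container running under a non-root UID. A minimal sketch; the names, key, and UID below are illustrative:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # any non-root UID exercises the same path
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF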
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:13:40.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:13:40.175: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 14 22:13:40.181: INFO: Number of nodes with available pods: 0 May 14 22:13:40.181: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 14 22:13:40.245: INFO: Number of nodes with available pods: 0 May 14 22:13:40.245: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:41.250: INFO: Number of nodes with available pods: 0 May 14 22:13:41.250: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:42.253: INFO: Number of nodes with available pods: 0 May 14 22:13:42.253: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:43.249: INFO: Number of nodes with available pods: 1 May 14 22:13:43.249: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 14 22:13:43.302: INFO: Number of nodes with available pods: 1 May 14 22:13:43.302: INFO: Number of running nodes: 0, number of available pods: 1 May 14 22:13:44.307: INFO: Number of nodes with available pods: 0 May 14 22:13:44.307: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 14 22:13:44.354: INFO: Number of nodes with available pods: 0 May 14 22:13:44.354: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:45.397: INFO: Number of nodes with available pods: 0 May 14 22:13:45.397: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:46.357: INFO: Number of nodes with available pods: 0 May 14 22:13:46.357: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:47.357: INFO: Number of nodes with available pods: 0 May 14 22:13:47.357: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:48.492: INFO: Number of nodes with available pods: 0 May 14 22:13:48.492: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:49.358: INFO: Number of nodes with available pods: 0 May 14 22:13:49.358: INFO: Node jerma-worker is running more than one daemon pod May 14 22:13:50.358: INFO: Number of nodes with available pods: 0 May 14 22:13:50.358: INFO: Node jerma-worker is running more than one daemon pod May 14 
22:13:51.358: INFO: Number of nodes with available pods: 1 May 14 22:13:51.358: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9985, will wait for the garbage collector to delete the pods May 14 22:13:51.436: INFO: Deleting DaemonSet.extensions daemon-set took: 5.946001ms May 14 22:13:51.736: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.195931ms May 14 22:13:59.238: INFO: Number of nodes with available pods: 0 May 14 22:13:59.238: INFO: Number of running nodes: 0, number of available pods: 0 May 14 22:13:59.240: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9985/daemonsets","resourceVersion":"16221612"},"items":null} May 14 22:13:59.241: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9985/pods","resourceVersion":"16221612"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:13:59.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9985" for this suite. • [SLOW TEST:19.216 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":209,"skipped":3486,"failed":0} S ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:13:59.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8919 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8919;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8919 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8919;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8919.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8919.svc;check="$$(dig 
+tcp +noall +answer +search dns-test-service.dns-8919.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8919.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8919.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8919.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8919.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 44.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.44_udp@PTR;check="$$(dig +tcp +noall +answer +search 44.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.44_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8919 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8919;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8919 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8919;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8919.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8919.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8919.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8919.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8919.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8919.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8919.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8919.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8919.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 44.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.44_udp@PTR;check="$$(dig +tcp +noall +answer +search 44.113.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.113.44_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 22:14:05.488: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.493: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.496: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.498: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.501: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.504: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.506: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.508: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.526: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.528: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.531: INFO: Unable to read jessie_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.534: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.537: INFO: Unable to read jessie_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.540: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.542: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.545: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:05.564: INFO: Lookups using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8919 wheezy_tcp@dns-test-service.dns-8919 wheezy_udp@dns-test-service.dns-8919.svc wheezy_tcp@dns-test-service.dns-8919.svc wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8919 jessie_tcp@dns-test-service.dns-8919 jessie_udp@dns-test-service.dns-8919.svc jessie_tcp@dns-test-service.dns-8919.svc jessie_udp@_http._tcp.dns-test-service.dns-8919.svc jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc] May 14 22:14:10.569: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.572: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.579: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.582: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.585: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.587: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.589: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.606: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.608: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.610: INFO: Unable to read jessie_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.615: INFO: Unable to read jessie_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.617: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.620: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.623: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:10.638: INFO: Lookups using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8919 wheezy_tcp@dns-test-service.dns-8919 wheezy_udp@dns-test-service.dns-8919.svc wheezy_tcp@dns-test-service.dns-8919.svc wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8919 jessie_tcp@dns-test-service.dns-8919 jessie_udp@dns-test-service.dns-8919.svc jessie_tcp@dns-test-service.dns-8919.svc jessie_udp@_http._tcp.dns-test-service.dns-8919.svc jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc] May 14 22:14:15.569: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.573: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.580: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.583: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.586: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.590: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.618: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.640: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.642: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.645: INFO: Unable to read jessie_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.648: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.651: INFO: Unable to read jessie_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.653: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.659: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:15.679: INFO: Lookups using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8919 wheezy_tcp@dns-test-service.dns-8919 wheezy_udp@dns-test-service.dns-8919.svc wheezy_tcp@dns-test-service.dns-8919.svc wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8919 jessie_tcp@dns-test-service.dns-8919 jessie_udp@dns-test-service.dns-8919.svc jessie_tcp@dns-test-service.dns-8919.svc jessie_udp@_http._tcp.dns-test-service.dns-8919.svc jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc] May 14 22:14:20.569: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.572: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.578: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.580: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.586: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.588: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.627: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.629: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.632: INFO: Unable to read jessie_udp@dns-test-service.dns-8919 from pod 
dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.636: INFO: Unable to read jessie_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.641: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.643: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:20.663: INFO: Lookups using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8919 wheezy_tcp@dns-test-service.dns-8919 wheezy_udp@dns-test-service.dns-8919.svc wheezy_tcp@dns-test-service.dns-8919.svc wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8919 jessie_tcp@dns-test-service.dns-8919 jessie_udp@dns-test-service.dns-8919.svc jessie_tcp@dns-test-service.dns-8919.svc jessie_udp@_http._tcp.dns-test-service.dns-8919.svc jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc] May 14 22:14:25.567: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.570: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.573: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.578: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.580: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919.svc from pod 
dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.582: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.584: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.604: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.606: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.608: INFO: Unable to read jessie_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.610: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.612: INFO: Unable to read jessie_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.615: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.617: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.619: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:25.687: INFO: Lookups using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8919 wheezy_tcp@dns-test-service.dns-8919 wheezy_udp@dns-test-service.dns-8919.svc wheezy_tcp@dns-test-service.dns-8919.svc wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8919 jessie_tcp@dns-test-service.dns-8919 jessie_udp@dns-test-service.dns-8919.svc jessie_tcp@dns-test-service.dns-8919.svc jessie_udp@_http._tcp.dns-test-service.dns-8919.svc jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc] May 14 22:14:30.568: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.571: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.627: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.632: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.635: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.637: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.653: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.656: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.658: INFO: Unable to read jessie_udp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.661: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919 from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.663: INFO: Unable to read jessie_udp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.666: INFO: Unable to read jessie_tcp@dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.668: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8919.svc from pod 
dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.671: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc from pod dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770: the server could not find the requested resource (get pods dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770) May 14 22:14:30.687: INFO: Lookups using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8919 wheezy_tcp@dns-test-service.dns-8919 wheezy_udp@dns-test-service.dns-8919.svc wheezy_tcp@dns-test-service.dns-8919.svc wheezy_udp@_http._tcp.dns-test-service.dns-8919.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8919.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8919 jessie_tcp@dns-test-service.dns-8919 jessie_udp@dns-test-service.dns-8919.svc jessie_tcp@dns-test-service.dns-8919.svc jessie_udp@_http._tcp.dns-test-service.dns-8919.svc jessie_tcp@_http._tcp.dns-test-service.dns-8919.svc] May 14 22:14:35.671: INFO: DNS probes using dns-8919/dns-test-d8e343aa-5f17-4163-bf57-afbf514a2770 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:14:38.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8919" for this suite. • [SLOW TEST:39.045 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":210,"skipped":3487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:14:38.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:14:43.007: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:14:45.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:14:47.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:14:49.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:14:52.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:14:55.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-7822" for this suite. STEP: Destroying namespace "webhook-7822-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.513 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":211,"skipped":3535,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:14:55.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:14:56.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa" in namespace "projected-5581" to be "success or failure" May 14 22:14:56.110: INFO: Pod "downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa": Phase="Pending", Reason="", readiness=false. Elapsed: 45.00904ms May 14 22:14:58.113: INFO: Pod "downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048899236s May 14 22:15:00.117: INFO: Pod "downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa": Phase="Running", Reason="", readiness=true. Elapsed: 4.052792635s May 14 22:15:02.125: INFO: Pod "downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060813983s STEP: Saw pod success May 14 22:15:02.125: INFO: Pod "downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa" satisfied condition "success or failure" May 14 22:15:02.128: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa container client-container: STEP: delete the pod May 14 22:15:02.309: INFO: Waiting for pod downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa to disappear May 14 22:15:02.352: INFO: Pod downwardapi-volume-8a4d14f9-e4b4-46d5-9e81-61771793adaa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:15:02.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5581" for this suite. 
• [SLOW TEST:6.502 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3544,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:15:02.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 14 22:15:02.861: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:03.033: INFO: Number of nodes with available pods: 0 May 14 22:15:03.033: INFO: Node jerma-worker is running more than one daemon pod May 14 22:15:04.037: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:04.040: INFO: Number of nodes with available pods: 0 May 14 22:15:04.040: INFO: Node jerma-worker is running more than one daemon pod May 14 22:15:05.037: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:05.040: INFO: Number of nodes with available pods: 0 May 14 22:15:05.040: INFO: Node jerma-worker is running more than one daemon pod May 14 22:15:06.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:06.042: INFO: Number of nodes with available pods: 0 May 14 22:15:06.042: INFO: Node jerma-worker is running more than one daemon pod May 14 22:15:07.039: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:07.042: INFO: Number of nodes with available pods: 0 May 14 22:15:07.042: INFO: Node jerma-worker is running more than one daemon pod May 14 22:15:08.039: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:08.043: INFO: Number of nodes with available pods: 2 May 14 22:15:08.043: INFO: Number of running nodes: 2, 
number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 14 22:15:08.117: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:08.120: INFO: Number of nodes with available pods: 1 May 14 22:15:08.120: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:09.126: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:09.130: INFO: Number of nodes with available pods: 1 May 14 22:15:09.130: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:10.124: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:10.126: INFO: Number of nodes with available pods: 1 May 14 22:15:10.126: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:11.124: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:11.128: INFO: Number of nodes with available pods: 1 May 14 22:15:11.128: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:12.124: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:12.128: INFO: Number of nodes with available pods: 1 May 14 22:15:12.128: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:13.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:13.127: INFO: Number of nodes with available pods: 1 May 14 22:15:13.127: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:14.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:14.128: INFO: Number of nodes with available pods: 1 May 14 22:15:14.128: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:15.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:15.129: INFO: Number of nodes with available pods: 1 May 14 22:15:15.129: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:16.124: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:16.127: INFO: Number of nodes with available pods: 1 May 14 22:15:16.127: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:17.124: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:17.126: INFO: Number of nodes with available pods: 1 May 14 22:15:17.126: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:18.125: INFO: DaemonSet pods 
can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:18.129: INFO: Number of nodes with available pods: 1 May 14 22:15:18.129: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:19.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:19.129: INFO: Number of nodes with available pods: 1 May 14 22:15:19.129: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:20.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:20.129: INFO: Number of nodes with available pods: 1 May 14 22:15:20.129: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:21.164: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:21.168: INFO: Number of nodes with available pods: 1 May 14 22:15:21.168: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:22.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:22.127: INFO: Number of nodes with available pods: 1 May 14 22:15:22.127: INFO: Node jerma-worker2 is running more than one daemon pod May 14 22:15:23.123: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 22:15:23.126: INFO: Number of nodes with available pods: 2 May 14 22:15:23.126: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7081, will wait for the garbage collector to delete the pods May 14 22:15:23.188: INFO: Deleting DaemonSet.extensions daemon-set took: 6.654035ms May 14 22:15:23.489: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298694ms May 14 22:15:29.624: INFO: Number of nodes with available pods: 0 May 14 22:15:29.624: INFO: Number of running nodes: 0, number of available pods: 0 May 14 22:15:29.627: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7081/daemonsets","resourceVersion":"16222095"},"items":null} May 14 22:15:29.630: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7081/pods","resourceVersion":"16222095"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:15:29.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7081" for this suite. 
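The launch/revive loop above is plain status polling: a DaemonSet is done once NumberAvailable equals DesiredNumberScheduled. A rough client-go equivalent, under the same conventions and hedges as the earlier sketch (name, image, namespace, and the two-minute deadline are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
				},
			},
		},
	}
	if _, err := clientset.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mirror the "Number of running nodes / available pods" checks: nodes with
	// taints the pods don't tolerate are already excluded from
	// DesiredNumberScheduled by the controller, so comparing counters suffices.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cur, err := clientset.AppsV1().DaemonSets("default").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if cur.Status.DesiredNumberScheduled > 0 && cur.Status.NumberAvailable == cur.Status.DesiredNumberScheduled {
			fmt.Printf("daemon pods available on all %d nodes\n", cur.Status.NumberAvailable)
			return
		}
		time.Sleep(time.Second)
	}
	panic("daemon set did not become available in time")
}

Deleting one daemon pod, as the revive phase does, drops NumberAvailable below DesiredNumberScheduled until the controller replaces it, so the same poll covers both halves of the test.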
• [SLOW TEST:27.286 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":213,"skipped":3547,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:15:29.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:15:30.939: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:15:32.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091330, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091331, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:15:34.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091330, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091331, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: 
Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:15:37.995: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:15:38.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2725" for this suite. STEP: Destroying namespace "webhook-2725-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.175 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":214,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:15:38.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5383 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5383 STEP: creating replication controller externalsvc in namespace services-5383 I0514 22:15:39.136733 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5383, replica count: 2 I0514 22:15:42.187228 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 22:15:45.187475 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 14 
22:15:45.230: INFO: Creating new exec pod May 14 22:15:49.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5383 execpodrp4x6 -- /bin/sh -x -c nslookup clusterip-service' May 14 22:15:49.609: INFO: stderr: "I0514 22:15:49.402578 4155 log.go:172] (0xc000962630) (0xc00067fcc0) Create stream\nI0514 22:15:49.402630 4155 log.go:172] (0xc000962630) (0xc00067fcc0) Stream added, broadcasting: 1\nI0514 22:15:49.405311 4155 log.go:172] (0xc000962630) Reply frame received for 1\nI0514 22:15:49.405372 4155 log.go:172] (0xc000962630) (0xc00059c500) Create stream\nI0514 22:15:49.405395 4155 log.go:172] (0xc000962630) (0xc00059c500) Stream added, broadcasting: 3\nI0514 22:15:49.406332 4155 log.go:172] (0xc000962630) Reply frame received for 3\nI0514 22:15:49.406389 4155 log.go:172] (0xc000962630) (0xc0003e72c0) Create stream\nI0514 22:15:49.406406 4155 log.go:172] (0xc000962630) (0xc0003e72c0) Stream added, broadcasting: 5\nI0514 22:15:49.407286 4155 log.go:172] (0xc000962630) Reply frame received for 5\nI0514 22:15:49.503229 4155 log.go:172] (0xc000962630) Data frame received for 5\nI0514 22:15:49.503277 4155 log.go:172] (0xc0003e72c0) (5) Data frame handling\nI0514 22:15:49.503305 4155 log.go:172] (0xc0003e72c0) (5) Data frame sent\n+ nslookup clusterip-service\nI0514 22:15:49.598382 4155 log.go:172] (0xc000962630) Data frame received for 3\nI0514 22:15:49.598409 4155 log.go:172] (0xc00059c500) (3) Data frame handling\nI0514 22:15:49.598426 4155 log.go:172] (0xc00059c500) (3) Data frame sent\nI0514 22:15:49.600024 4155 log.go:172] (0xc000962630) Data frame received for 3\nI0514 22:15:49.600040 4155 log.go:172] (0xc00059c500) (3) Data frame handling\nI0514 22:15:49.600051 4155 log.go:172] (0xc00059c500) (3) Data frame sent\nI0514 22:15:49.600829 4155 log.go:172] (0xc000962630) Data frame received for 3\nI0514 22:15:49.600846 4155 log.go:172] (0xc00059c500) (3) Data frame handling\nI0514 22:15:49.601396 4155 log.go:172] (0xc000962630) Data frame received for 5\nI0514 22:15:49.601428 4155 log.go:172] (0xc0003e72c0) (5) Data frame handling\nI0514 22:15:49.603291 4155 log.go:172] (0xc000962630) Data frame received for 1\nI0514 22:15:49.603310 4155 log.go:172] (0xc00067fcc0) (1) Data frame handling\nI0514 22:15:49.603320 4155 log.go:172] (0xc00067fcc0) (1) Data frame sent\nI0514 22:15:49.603342 4155 log.go:172] (0xc000962630) (0xc00067fcc0) Stream removed, broadcasting: 1\nI0514 22:15:49.603457 4155 log.go:172] (0xc000962630) Go away received\nI0514 22:15:49.603719 4155 log.go:172] (0xc000962630) (0xc00067fcc0) Stream removed, broadcasting: 1\nI0514 22:15:49.603737 4155 log.go:172] (0xc000962630) (0xc00059c500) Stream removed, broadcasting: 3\nI0514 22:15:49.603744 4155 log.go:172] (0xc000962630) (0xc0003e72c0) Stream removed, broadcasting: 5\n" May 14 22:15:49.609: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5383.svc.cluster.local\tcanonical name = externalsvc.services-5383.svc.cluster.local.\nName:\texternalsvc.services-5383.svc.cluster.local\nAddress: 10.105.49.68\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5383, will wait for the garbage collector to delete the pods May 14 22:15:49.669: INFO: Deleting ReplicationController externalsvc took: 6.400244ms May 14 22:15:49.769: INFO: Terminating ReplicationController externalsvc pods took: 100.254251ms May 14 22:15:59.615: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:15:59.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5383" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.820 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":215,"skipped":3600,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:15:59.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-a8cb3a5b-537d-441d-b7a2-b5d7fccb2265 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:05.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8801" for this suite. 
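The binary-data check above hinges on one API detail: a ConfigMap carries raw bytes in BinaryData alongside the UTF-8 Data map, and a pod mounting it gets one file per key from either map. A minimal creation sketch, same conventions as the earlier ones (name and bytes are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		// Data must be valid UTF-8; BinaryData may hold arbitrary bytes and is
		// base64-encoded on the wire (it is the BinaryData:map[string][]byte{}
		// field visible in the watch output earlier in this log).
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump": {0xDE, 0xCA, 0xFE, 0x00, 0xBA, 0xBE}},
	}
	created, err := clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s with %d binary key(s)\n", created.Name, len(created.BinaryData))
}

Keys must be unique across Data and BinaryData, since both project into the same directory when the ConfigMap is mounted.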
• [SLOW TEST:6.323 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3605,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:05.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 14 22:16:06.045: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 22:16:06.060: INFO: Waiting for terminating namespaces to be deleted... May 14 22:16:06.063: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 14 22:16:06.066: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 22:16:06.066: INFO: Container kindnet-cni ready: true, restart count 0 May 14 22:16:06.066: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 22:16:06.066: INFO: Container kube-proxy ready: true, restart count 0 May 14 22:16:06.066: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 14 22:16:06.070: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 14 22:16:06.070: INFO: Container kube-hunter ready: false, restart count 0 May 14 22:16:06.070: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 22:16:06.070: INFO: Container kindnet-cni ready: true, restart count 0 May 14 22:16:06.070: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 14 22:16:06.070: INFO: Container kube-bench ready: false, restart count 0 May 14 22:16:06.070: INFO: pod-configmaps-f499d3b8-5404-4e9c-89d6-bf25643dd6cf from configmap-8801 started at 2020-05-14 22:15:59 +0000 UTC (2 container statuses recorded) May 14 22:16:06.070: INFO: Container configmap-volume-binary-test ready: false, restart count 0 May 14 22:16:06.070: INFO: Container configmap-volume-data-test ready: true, restart count 0 May 14 22:16:06.070: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 14 22:16:06.070: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-54697bf2-1f19-4cbc-b2eb-249d8f0cd8ba 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-54697bf2-1f19-4cbc-b2eb-249d8f0cd8ba off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-54697bf2-1f19-4cbc-b2eb-249d8f0cd8ba [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:24.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6871" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.425 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":217,"skipped":3609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:24.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 14 22:16:29.000: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1581 pod-service-account-d94fd12f-175a-41c9-b06f-bfd9a9afaf60 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 14 22:16:29.237: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1581 pod-service-account-d94fd12f-175a-41c9-b06f-bfd9a9afaf60 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 14 22:16:29.432: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1581 pod-service-account-d94fd12f-175a-41c9-b06f-bfd9a9afaf60 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] 
[sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:29.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1581" for this suite. • [SLOW TEST:5.271 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":218,"skipped":3633,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:29.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:16:29.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28" in namespace "projected-3280" to be "success or failure" May 14 22:16:29.912: INFO: Pod "downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28": Phase="Pending", Reason="", readiness=false. Elapsed: 82.326366ms May 14 22:16:31.931: INFO: Pod "downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100700746s May 14 22:16:34.098: INFO: Pod "downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28": Phase="Running", Reason="", readiness=true. Elapsed: 4.268255514s May 14 22:16:36.102: INFO: Pod "downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.271509464s STEP: Saw pod success May 14 22:16:36.102: INFO: Pod "downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28" satisfied condition "success or failure" May 14 22:16:36.104: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28 container client-container: STEP: delete the pod May 14 22:16:36.138: INFO: Waiting for pod downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28 to disappear May 14 22:16:36.163: INFO: Pod downwardapi-volume-e2f30e08-b527-4abe-88a6-387418791d28 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:36.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3280" for this suite. 
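The DefaultMode assertion comes down to one field on the projected volume: every projected file without a per-item Mode inherits DefaultMode. A sketch of a comparable pod, same conventions as the sketches above (names, image, and the 0400 value are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	mode := int32(0400) // octal; applied to every file lacking its own Mode
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									FieldRef: &corev1.ObjectFieldSelector{
										APIVersion: "v1",
										FieldPath:  "metadata.name",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

A 0400 default shows up as -r-------- in the container's ls output; a per-item Mode on the DownwardAPIVolumeFile would override it for that one file.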
• [SLOW TEST:6.508 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3634,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:36.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-9b59028e-4631-4a52-a024-5e0ce7908b64 STEP: Creating a pod to test consume secrets May 14 22:16:36.429: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea" in namespace "projected-5646" to be "success or failure" May 14 22:16:36.469: INFO: Pod "pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea": Phase="Pending", Reason="", readiness=false. Elapsed: 40.224438ms May 14 22:16:38.472: INFO: Pod "pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043411571s May 14 22:16:40.479: INFO: Pod "pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049692221s May 14 22:16:42.483: INFO: Pod "pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05387563s STEP: Saw pod success May 14 22:16:42.483: INFO: Pod "pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea" satisfied condition "success or failure" May 14 22:16:42.486: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea container projected-secret-volume-test: STEP: delete the pod May 14 22:16:42.579: INFO: Waiting for pod pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea to disappear May 14 22:16:42.589: INFO: Pod pod-projected-secrets-a245fcce-a978-428d-bed8-8b29a83fadea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:42.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5646" for this suite. 
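The projected-secret test pairs a Secret with a pod that mounts it through a projected volume and reads the key back, which is what the "consume secrets" pod here did. A condensed sketch, same conventions as above (names, key, and image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := clientset.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

A SecretProjection behaves like a plain secret volume source, but the projected form lets several sources (secrets, configMaps, downwardAPI, service account tokens) share a single mount point.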
• [SLOW TEST:6.426 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3642,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:42.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 14 22:16:42.645: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:58.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3772" for this suite. 
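The rename scenario above drives the apiserver's OpenAPI publishing: when one version of a multi-version CRD is renamed, the published definitions must track the change while the untouched version stays as-is. A sketch of the kind of two-version CRD involved, in Go, assuming apiextensions.k8s.io/v1 types from k8s.io/apiextensions-apiserver; the group, kind, and version names are invented for illustration:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two served versions; renaming one (say v3 -> v4) changes which
	// definitions the apiserver publishes in its OpenAPI spec.
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "testcrds", Singular: "testcrd", Kind: "TestCrd", ListKind: "TestCrdList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				{Name: "v3", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	for _, v := range crd.Spec.Versions {
		fmt.Printf("version %s served=%v storage=%v\n", v.Name, v.Served, v.Storage)
	}
}
```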
• [SLOW TEST:15.706 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":221,"skipped":3654,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:58.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 14 22:16:58.407: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2743 /api/v1/namespaces/watch-2743/configmaps/e2e-watch-test-resource-version 5d028276-9c8b-4ac0-8fc2-83793b4a95fa 16222759 0 2020-05-14 22:16:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 22:16:58.407: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2743 /api/v1/namespaces/watch-2743/configmaps/e2e-watch-test-resource-version 5d028276-9c8b-4ac0-8fc2-83793b4a95fa 16222760 0 2020-05-14 22:16:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:16:58.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2743" for this suite. 
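Starting a watch at a known resourceVersion, as the Watchers test above does, replays every change recorded after that point, which is why the log shows exactly the MODIFIED and DELETED events that followed the first update. A minimal client-go sketch, assuming v0.17-era method signatures (no context argument on Watch); the resource version value and kubeconfig path are illustrative:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// rv would be the ResourceVersion returned by an earlier update;
	// the value here is purely illustrative.
	rv := "16222758"
	w, err := client.CoreV1().ConfigMaps("watch-2743").Watch(metav1.ListOptions{
		ResourceVersion: rv,
		LabelSelector:   "watch-this-configmap=from-resource-version",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events arrive in order from rv onwards: MODIFIED then DELETED in the log above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```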
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":222,"skipped":3658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:16:58.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 14 22:16:58.526: INFO: Waiting up to 5m0s for pod "pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254" in namespace "emptydir-8519" to be "success or failure" May 14 22:16:58.554: INFO: Pod "pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254": Phase="Pending", Reason="", readiness=false. Elapsed: 28.156118ms May 14 22:17:00.576: INFO: Pod "pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050301622s May 14 22:17:02.580: INFO: Pod "pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254": Phase="Running", Reason="", readiness=true. Elapsed: 4.054011015s May 14 22:17:04.584: INFO: Pod "pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05838796s STEP: Saw pod success May 14 22:17:04.584: INFO: Pod "pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254" satisfied condition "success or failure" May 14 22:17:04.588: INFO: Trying to get logs from node jerma-worker pod pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254 container test-container: STEP: delete the pod May 14 22:17:04.652: INFO: Waiting for pod pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254 to disappear May 14 22:17:04.662: INFO: Pod pod-5ca18ec4-266d-49b6-9c3c-cf0d6a7d6254 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:04.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8519" for this suite. 
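The (non-root,0666,tmpfs) case above encodes three choices: run as a non-root UID, expect files created with 0666 permissions, and back the emptyDir with memory rather than node disk. A sketch of a comparable pod in Go, assuming k8s.io/api v0.17-era types; the image and command stand in for the suite's mounttest image:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // non-root, per the (non-root,0666,tmpfs) case name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative stand-in
				// Write a file with 0666 permissions and read its mode back.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].VolumeSource.EmptyDir.Medium)
}
```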
• [SLOW TEST:6.256 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:04.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 14 22:17:04.724: INFO: Waiting up to 5m0s for pod "pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663" in namespace "emptydir-7083" to be "success or failure" May 14 22:17:04.728: INFO: Pod "pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663": Phase="Pending", Reason="", readiness=false. Elapsed: 3.591986ms May 14 22:17:06.732: INFO: Pod "pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007670069s May 14 22:17:08.737: INFO: Pod "pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012886074s STEP: Saw pod success May 14 22:17:08.737: INFO: Pod "pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663" satisfied condition "success or failure" May 14 22:17:08.741: INFO: Trying to get logs from node jerma-worker pod pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663 container test-container: STEP: delete the pod May 14 22:17:08.779: INFO: Waiting for pod pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663 to disappear May 14 22:17:08.871: INFO: Pod pod-4d06a08a-33c6-4302-b6f1-012ab0cb1663 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:08.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7083" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3721,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:08.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 14 22:17:09.161: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 22:17:09.214: INFO: Waiting for terminating namespaces to be deleted... May 14 22:17:09.216: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 14 22:17:09.220: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:17:09.220: INFO: Container kindnet-cni ready: true, restart count 0 May 14 22:17:09.220: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:17:09.220: INFO: Container kube-proxy ready: true, restart count 0 May 14 22:17:09.220: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 14 22:17:09.224: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:17:09.224: INFO: Container kindnet-cni ready: true, restart count 0 May 14 22:17:09.224: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 14 22:17:09.224: INFO: Container kube-bench ready: false, restart count 0 May 14 22:17:09.224: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:17:09.224: INFO: Container kube-proxy ready: true, restart count 0 May 14 22:17:09.224: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 14 22:17:09.224: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 14 22:17:09.351: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 14 22:17:09.351: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 14 22:17:09.351: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 14 22:17:09.351: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
May 14 22:17:09.351: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 14 22:17:09.365: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f.160f04f1a4212320], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8347/filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f.160f04f1f1bb21f3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f.160f04f26758e052], Reason = [Created], Message = [Created container filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f] STEP: Considering event: Type = [Normal], Name = [filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f.160f04f285fec0c1], Reason = [Started], Message = [Started container filler-pod-34b815e2-e2ec-4e2d-a357-54fb4402350f] STEP: Considering event: Type = [Normal], Name = [filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7.160f04f1a5ed878e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8347/filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7.160f04f231defc98], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7.160f04f28b82eef8], Reason = [Created], Message = [Created container filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7] STEP: Considering event: Type = [Normal], Name = [filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7.160f04f29e61ac4c], Reason = [Started], Message = [Started container filler-pod-7914f737-ae0d-47a1-80f6-94d4282b8fe7] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f04f30c665dc8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:16.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8347" for this suite. 
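The predicate test above saturates each schedulable node with a filler pod sized to the node's remaining allocatable CPU (11130m in this run), then submits one more pod and expects the FailedScheduling event seen in the log. A sketch of how such a CPU-request pod is shaped in Go, assuming k8s.io/api v0.17-era types; note the real test targets nodes via a label selector it applies first, while NodeName is used here for brevity:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod pins cpuMilli millicores on a specific node, the same way the
// predicate test saturates each worker before scheduling one pod too many.
// The name and image choices are illustrative.
func fillerPod(node string, cpuMilli int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("filler-%s", node)},
		Spec: corev1.PodSpec{
			NodeName: node,
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: *resource.NewMilliQuantity(cpuMilli, resource.DecimalSI),
					},
				},
			}},
		},
	}
}

func main() {
	p := fillerPod("jerma-worker", 11130) // 11130m matches the log above
	fmt.Println(p.Spec.Containers[0].Resources.Requests.Cpu().String())
}
```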
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.609 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":225,"skipped":3727,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:16.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:17:17.330: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:17:19.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:17:21.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091437, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:17:24.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:24.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8087" for this suite. STEP: Destroying namespace "webhook-8087-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.311 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":226,"skipped":3737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:24.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 22:17:29.369: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:29.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6670" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3862,"failed":0} ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:29.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:17:29.657: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.990395ms)
May 14 22:17:29.664: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 7.088627ms)
May 14 22:17:29.670: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.546807ms)
May 14 22:17:29.675: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.680792ms)
May 14 22:17:29.681: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.951179ms)
May 14 22:17:29.685: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.671906ms)
May 14 22:17:29.688: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.04126ms)
May 14 22:17:29.738: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 50.20543ms)
May 14 22:17:29.777: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 38.580478ms)
May 14 22:17:29.796: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 18.708084ms)
May 14 22:17:29.833: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 36.616017ms)
May 14 22:17:29.850: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 17.427223ms)
May 14 22:17:29.854: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.774654ms)
May 14 22:17:29.857: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.222873ms)
May 14 22:17:29.860: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.704846ms)
May 14 22:17:29.863: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.799053ms)
May 14 22:17:29.866: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.070862ms)
May 14 22:17:29.868: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.516988ms)
May 14 22:17:29.871: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.762089ms)
May 14 22:17:29.874: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.762101ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:29.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3370" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":228,"skipped":3862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:29.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 14 22:17:29.944: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:43.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5285" for this suite. • [SLOW TEST:13.753 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":229,"skipped":3909,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:43.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 14 22:17:51.810: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 22:17:51.820: INFO: Pod pod-with-poststart-http-hook still exists May 14 22:17:53.820: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 22:17:53.823: INFO: Pod pod-with-poststart-http-hook still exists May 14 22:17:55.820: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 22:17:55.824: INFO: Pod pod-with-poststart-http-hook still exists May 14 22:17:57.820: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 22:17:57.823: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:17:57.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2571" for this suite. • [SLOW TEST:14.177 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3914,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:17:57.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:02.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3227" for this suite. 
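The lifecycle-hook test earlier in this run wires a postStart HTTP GET at the handler pod created in BeforeEach; the container only counts as started once the hook call succeeds. A sketch of the hook wiring in Go, assuming k8s.io/api v0.17-era types (corev1.Handler; later releases renamed it LifecycleHandler); the target IP, path, and port are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// targetIP stands in for the handler pod's IP, which the real test
	// discovers at runtime from the pod created in BeforeEach.
	targetIP := "10.244.0.99"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					// The kubelet fires this GET right after the container starts.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: targetIP,
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PostStart.HTTPGet.Path)
}
```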
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":231,"skipped":3918,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:02.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0514 22:18:12.419296 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 22:18:12.419: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:12.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3124" for this suite. 
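The garbage-collector test above deletes a replication controller without orphaning and waits for its pods to disappear; the collector finds those pods through metadata.ownerReferences. A sketch of the owner link and the matching non-orphaning delete option in Go, assuming k8s.io/apimachinery v0.17-era types; the owner name and UID are illustrative:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	isController := true

	// A dependent object points at its owner through metadata.ownerReferences;
	// the garbage collector follows exactly this link to find the pods to
	// delete once their ReplicationController is gone.
	ref := metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "ReplicationController",
		Name:               "simpletest.rc",
		UID:                types.UID("d9607e19-f88f-11e6-a518-42010a800195"),
		Controller:         &isController,
		BlockOwnerDeletion: &isController,
	}
	fmt.Printf("owned by %s/%s (%s)\n", ref.Kind, ref.Name, ref.UID)

	// Deleting the owner with Background propagation (the "not orphaning"
	// case) removes the owner first and lets the GC reap dependents after.
	policy := metav1.DeletePropagationBackground
	opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy)
}
```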
• [SLOW TEST:10.277 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":232,"skipped":3933,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:12.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 14 22:18:12.518: INFO: Waiting up to 5m0s for pod "var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202" in namespace "var-expansion-1721" to be "success or failure" May 14 22:18:12.521: INFO: Pod "var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202": Phase="Pending", Reason="", readiness=false. Elapsed: 3.40897ms May 14 22:18:14.526: INFO: Pod "var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007647074s May 14 22:18:16.530: INFO: Pod "var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011792519s STEP: Saw pod success May 14 22:18:16.530: INFO: Pod "var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202" satisfied condition "success or failure" May 14 22:18:16.532: INFO: Trying to get logs from node jerma-worker pod var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202 container dapi-container: STEP: delete the pod May 14 22:18:16.552: INFO: Waiting for pod var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202 to disappear May 14 22:18:16.563: INFO: Pod var-expansion-d2bd8fa5-d9ca-4d9a-912d-c2a1658c3202 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:16.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1721" for this suite. 
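Substitution in a container's command, as exercised above, happens in the kubelet: $(VAR) references are resolved from the container's env before any shell runs, so a shell-level reassignment cannot mask them. A sketch in Go, assuming k8s.io/api v0.17-era types; the pod name and image are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The kubelet substitutes $(TEST_VAR) from Env before the command reaches
	// the shell, so the inline TEST_VAR=wrong assignment cannot win: the
	// container prints "test-value".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "TEST_VAR=wrong echo \"$(TEST_VAR)\""},
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
```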
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3941,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:16.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 14 22:18:16.685: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:25.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-880" for this suite. • [SLOW TEST:9.327 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":234,"skipped":3960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:25.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 14 22:18:30.560: INFO: Successfully updated pod "pod-update-activedeadlineseconds-948817a9-5d76-4f82-b96d-c8b7309e300e" May 14 22:18:30.560: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-948817a9-5d76-4f82-b96d-c8b7309e300e" in namespace "pods-3968" to be "terminated due to deadline exceeded" May 14 22:18:30.585: INFO: Pod "pod-update-activedeadlineseconds-948817a9-5d76-4f82-b96d-c8b7309e300e": 
Phase="Running", Reason="", readiness=true. Elapsed: 25.240574ms May 14 22:18:32.754: INFO: Pod "pod-update-activedeadlineseconds-948817a9-5d76-4f82-b96d-c8b7309e300e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.193687184s May 14 22:18:32.754: INFO: Pod "pod-update-activedeadlineseconds-948817a9-5d76-4f82-b96d-c8b7309e300e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:32.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3968" for this suite. • [SLOW TEST:6.842 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3996,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:32.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:18:33.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5" in namespace "projected-9023" to be "success or failure" May 14 22:18:33.078: INFO: Pod "downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547279ms May 14 22:18:35.145: INFO: Pod "downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070080875s May 14 22:18:37.196: INFO: Pod "downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.121433044s STEP: Saw pod success May 14 22:18:37.196: INFO: Pod "downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5" satisfied condition "success or failure" May 14 22:18:37.199: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5 container client-container: STEP: delete the pod May 14 22:18:37.349: INFO: Waiting for pod downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5 to disappear May 14 22:18:37.499: INFO: Pod downwardapi-volume-1bc18fb8-083e-4400-93f2-c9b2cd4e49b5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:37.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9023" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":4006,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:37.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-d77428ad-ed4f-4aef-9509-852b7df20a61 STEP: Creating a pod to test consume secrets May 14 22:18:37.770: INFO: Waiting up to 5m0s for pod "pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef" in namespace "secrets-77" to be "success or failure" May 14 22:18:37.798: INFO: Pod "pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef": Phase="Pending", Reason="", readiness=false. Elapsed: 28.147692ms May 14 22:18:39.802: INFO: Pod "pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03158687s May 14 22:18:41.805: INFO: Pod "pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef": Phase="Running", Reason="", readiness=true. Elapsed: 4.035256745s May 14 22:18:43.809: INFO: Pod "pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039106366s STEP: Saw pod success May 14 22:18:43.809: INFO: Pod "pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef" satisfied condition "success or failure" May 14 22:18:43.812: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef container secret-volume-test: STEP: delete the pod May 14 22:18:43.916: INFO: Waiting for pod pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef to disappear May 14 22:18:43.922: INFO: Pod pod-secrets-cec08de2-4f99-4b74-9a28-c6aedf34baef no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:43.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-77" for this suite. 
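"With mappings" in the secret-volume test above refers to the Items list on the volume source, which renames secret keys to chosen paths on disk. A short sketch in Go, assuming k8s.io/api v0.17-era types; the secret name and key are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Without Items, every key lands at a file named after the key; with
	// Items, key data-1 appears at .../new-path-data-1 instead of .../data-1.
	src := corev1.SecretVolumeSource{
		SecretName: "secret-test-map-example",
		Items: []corev1.KeyToPath{{
			Key:  "data-1",
			Path: "new-path-data-1",
		}},
	}
	vol := corev1.Volume{Name: "secret-volume", VolumeSource: corev1.VolumeSource{Secret: &src}}
	fmt.Printf("%s -> %s\n", vol.VolumeSource.Secret.Items[0].Key, vol.VolumeSource.Secret.Items[0].Path)
}
```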
• [SLOW TEST:6.344 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":4008,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:43.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 14 22:18:44.010: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. May 14 22:18:44.659: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 14 22:18:47.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:18:49.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091524, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:18:52.241: INFO: Waited 627.998183ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:52.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1839" for this suite. • [SLOW TEST:8.840 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":238,"skipped":4021,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:52.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:18:53.238: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:54.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1494" for this suite. 
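The Aggregator test a little earlier registers the sample API server by creating an APIService object that tells the aggregation layer which group/version to proxy to which in-cluster service. A sketch in Go, assuming apiregistration.k8s.io/v1 types from k8s.io/kube-aggregator; every name, namespace, and the CA bundle below are placeholders:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	// caBundle would carry the PEM bundle used to verify the sample server's
	// serving certificate; this value is a placeholder, not a real cert.
	caBundle := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----")

	apiService := &apiregistrationv1.APIService{
		// APIService names follow the <version>.<group> convention.
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			// Requests for this group/version get proxied to this service.
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-1839",
				Name:      "sample-api-server",
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	fmt.Println(apiService.Name)
}
```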
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":239,"skipped":4034,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:54.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 14 22:18:54.412: INFO: Waiting up to 5m0s for pod "client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96" in namespace "containers-466" to be "success or failure" May 14 22:18:54.415: INFO: Pod "client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.596115ms May 14 22:18:56.420: INFO: Pod "client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007925372s May 14 22:18:58.424: INFO: Pod "client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012070677s STEP: Saw pod success May 14 22:18:58.424: INFO: Pod "client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96" satisfied condition "success or failure" May 14 22:18:58.427: INFO: Trying to get logs from node jerma-worker pod client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96 container test-container: STEP: delete the pod May 14 22:18:58.447: INFO: Waiting for pod client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96 to disappear May 14 22:18:58.451: INFO: Pod client-containers-4a1288b4-06d9-43a7-ae9e-099a68e6db96 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:18:58.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-466" for this suite. 
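Overriding the image's default arguments (the Docker CMD), as the Docker Containers test above does, means setting Args on the container while leaving Command unset, so the image ENTRYPOINT still runs. A sketch in Go, assuming k8s.io/api v0.17-era types; the image and argument values are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image with a default CMD
				// Args replaces only the image's CMD; the ENTRYPOINT still runs.
				// Setting Command as well would replace the ENTRYPOINT too.
				Args: []string{"echo", "override", "arguments"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Args)
}
```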
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4064,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:18:58.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 14 22:18:58.616: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 22:18:58.626: INFO: Waiting for terminating namespaces to be deleted... May 14 22:18:58.628: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 14 22:18:58.632: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:18:58.632: INFO: Container kindnet-cni ready: true, restart count 0 May 14 22:18:58.633: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:18:58.633: INFO: Container kube-proxy ready: true, restart count 0 May 14 22:18:58.633: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 14 22:18:58.638: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:18:58.638: INFO: Container kindnet-cni ready: true, restart count 0 May 14 22:18:58.638: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 14 22:18:58.638: INFO: Container kube-bench ready: false, restart count 0 May 14 22:18:58.638: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 14 22:18:58.638: INFO: Container kube-proxy ready: true, restart count 0 May 14 22:18:58.638: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 14 22:18:58.638: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-aeeba6e5-52ae-4641-b12e-d9d32b18196e 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node on which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-aeeba6e5-52ae-4641-b12e-d9d32b18196e off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-aeeba6e5-52ae-4641-b12e-d9d32b18196e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:06.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-704" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.508 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":241,"skipped":4068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:06.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-08524726-db7d-4660-80e4-fe8f3d801a6f STEP: Creating a pod to test consume configMaps May 14 22:24:07.155: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a" in namespace "projected-713" to be "success or failure" May 14 22:24:07.249: INFO: Pod "pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a": Phase="Pending", Reason="", readiness=false. Elapsed: 93.824604ms May 14 22:24:09.252: INFO: Pod "pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097048574s May 14 22:24:11.256: INFO: Pod "pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.100939685s STEP: Saw pod success May 14 22:24:11.256: INFO: Pod "pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a" satisfied condition "success or failure" May 14 22:24:11.259: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a container projected-configmap-volume-test: STEP: delete the pod May 14 22:24:11.299: INFO: Waiting for pod pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a to disappear May 14 22:24:11.314: INFO: Pod pod-projected-configmaps-1fd66f61-284b-4783-99d9-7375dcdefe3a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:11.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-713" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4140,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:11.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:24:11.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1" in namespace "projected-9427" to be "success or failure" May 14 22:24:11.548: INFO: Pod "downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.046636ms May 14 22:24:13.633: INFO: Pod "downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132784593s May 14 22:24:15.636: INFO: Pod "downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.136196667s STEP: Saw pod success May 14 22:24:15.637: INFO: Pod "downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1" satisfied condition "success or failure" May 14 22:24:15.638: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1 container client-container: STEP: delete the pod May 14 22:24:15.807: INFO: Waiting for pod downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1 to disappear May 14 22:24:15.902: INFO: Pod downwardapi-volume-ed600e88-7891-4c1b-847a-d6d7799032c1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:15.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9427" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4142,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:15.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1620904b-4e7b-4184-b801-30ab0d114c7b STEP: Creating a pod to test consume secrets May 14 22:24:16.057: INFO: Waiting up to 5m0s for pod "pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013" in namespace "secrets-7901" to be "success or failure" May 14 22:24:16.098: INFO: Pod "pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013": Phase="Pending", Reason="", readiness=false. Elapsed: 40.729513ms May 14 22:24:18.102: INFO: Pod "pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044440027s May 14 22:24:20.107: INFO: Pod "pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049604355s STEP: Saw pod success May 14 22:24:20.107: INFO: Pod "pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013" satisfied condition "success or failure" May 14 22:24:20.114: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013 container secret-volume-test: STEP: delete the pod May 14 22:24:20.145: INFO: Waiting for pod pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013 to disappear May 14 22:24:20.162: INFO: Pod pod-secrets-f4cc7f3f-787d-467a-a355-e4537e800013 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:20.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7901" for this suite. 
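The multiple-volumes test above mounts one secret at two paths inside a single pod. A rough client-go equivalent (secret name, namespace, and mount paths are illustrative assumptions):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "shared-secret"}, // hypothetical name
            StringData: map[string]string{"data-1": "value-1"},
        }
        if _, err := client.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // Two volumes backed by the same secret, mounted at two paths in one container.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-two-volumes"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    {Name: "vol-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "shared-secret"}}},
                    {Name: "vol-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "shared-secret"}}},
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"/bin/sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
                        {Name: "vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
                    },
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }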
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4154,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:20.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:20.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3458" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":245,"skipped":4171,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:20.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:24:20.978: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:24:22.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091860, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091860, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091861, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091860, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:24:24.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091860, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091860, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725091860, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:24:28.033: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:28.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9243" for this suite. STEP: Destroying namespace "webhook-9243-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.823 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":246,"skipped":4191,"failed":0} [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:28.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5529 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5529 STEP: creating replication controller externalsvc in namespace services-5529 I0514 22:24:28.663503 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5529, replica count: 2 I0514 22:24:31.713872 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 22:24:34.714100 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 14 22:24:34.791: INFO: Creating new exec pod May 14 22:24:38.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5529 execpodd7qrl -- /bin/sh -x -c nslookup nodeport-service' May 14 22:24:42.470: INFO: stderr: "I0514 22:24:42.399608 4242 log.go:172] (0xc00081ab00) (0xc0003f75e0) Create stream\nI0514 22:24:42.399638 4242 log.go:172] (0xc00081ab00) (0xc0003f75e0) Stream added, broadcasting: 1\nI0514 22:24:42.401500 4242 log.go:172] (0xc00081ab00) Reply frame received for 1\nI0514 22:24:42.401526 4242 log.go:172] (0xc00081ab00) (0xc0003f7680) Create stream\nI0514 22:24:42.401537 4242 log.go:172] (0xc00081ab00) (0xc0003f7680) Stream added, broadcasting: 3\nI0514 22:24:42.402352 4242 log.go:172] (0xc00081ab00) Reply frame received for 3\nI0514 22:24:42.402420 4242 log.go:172] (0xc00081ab00) (0xc0007dc000) Create stream\nI0514 22:24:42.402434 4242 log.go:172] (0xc00081ab00) (0xc0007dc000) Stream added, broadcasting: 5\nI0514 22:24:42.403233 4242 log.go:172] (0xc00081ab00) 
Reply frame received for 5\nI0514 22:24:42.454281 4242 log.go:172] (0xc00081ab00) Data frame received for 5\nI0514 22:24:42.454311 4242 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0514 22:24:42.454337 4242 log.go:172] (0xc0007dc000) (5) Data frame sent\n+ nslookup nodeport-service\nI0514 22:24:42.461805 4242 log.go:172] (0xc00081ab00) Data frame received for 3\nI0514 22:24:42.461830 4242 log.go:172] (0xc0003f7680) (3) Data frame handling\nI0514 22:24:42.461862 4242 log.go:172] (0xc0003f7680) (3) Data frame sent\nI0514 22:24:42.462835 4242 log.go:172] (0xc00081ab00) Data frame received for 3\nI0514 22:24:42.462863 4242 log.go:172] (0xc0003f7680) (3) Data frame handling\nI0514 22:24:42.462887 4242 log.go:172] (0xc0003f7680) (3) Data frame sent\nI0514 22:24:42.463739 4242 log.go:172] (0xc00081ab00) Data frame received for 3\nI0514 22:24:42.463764 4242 log.go:172] (0xc0003f7680) (3) Data frame handling\nI0514 22:24:42.463830 4242 log.go:172] (0xc00081ab00) Data frame received for 5\nI0514 22:24:42.463853 4242 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0514 22:24:42.465628 4242 log.go:172] (0xc00081ab00) Data frame received for 1\nI0514 22:24:42.465657 4242 log.go:172] (0xc0003f75e0) (1) Data frame handling\nI0514 22:24:42.465682 4242 log.go:172] (0xc0003f75e0) (1) Data frame sent\nI0514 22:24:42.465821 4242 log.go:172] (0xc00081ab00) (0xc0003f75e0) Stream removed, broadcasting: 1\nI0514 22:24:42.465869 4242 log.go:172] (0xc00081ab00) Go away received\nI0514 22:24:42.466213 4242 log.go:172] (0xc00081ab00) (0xc0003f75e0) Stream removed, broadcasting: 1\nI0514 22:24:42.466230 4242 log.go:172] (0xc00081ab00) (0xc0003f7680) Stream removed, broadcasting: 3\nI0514 22:24:42.466243 4242 log.go:172] (0xc00081ab00) (0xc0007dc000) Stream removed, broadcasting: 5\n" May 14 22:24:42.471: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5529.svc.cluster.local\tcanonical name = externalsvc.services-5529.svc.cluster.local.\nName:\texternalsvc.services-5529.svc.cluster.local\nAddress: 10.99.252.77\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5529, will wait for the garbage collector to delete the pods May 14 22:24:42.528: INFO: Deleting ReplicationController externalsvc took: 4.522754ms May 14 22:24:42.828: INFO: Terminating ReplicationController externalsvc pods took: 300.214566ms May 14 22:24:59.373: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:24:59.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5529" for this suite. 
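Changing a service from NodePort to ExternalName, as the test does above, is a single update that also has to clear the fields an ExternalName service cannot carry. A sketch under the same names the log shows (assumes a recent client-go; the exact field-clearing rules are version-dependent):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        svc, err := client.CoreV1().Services("services-5529").Get(ctx, "nodeport-service", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Flip the type; an ExternalName service resolves as a DNS CNAME
        // (visible in the nslookup output above), so the cluster IP and node
        // ports the service previously held must be cleared.
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = "externalsvc.services-5529.svc.cluster.local"
        svc.Spec.ClusterIP = ""
        for i := range svc.Spec.Ports {
            svc.Spec.Ports[i].NodePort = 0
        }

        if _, err := client.CoreV1().Services("services-5529").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }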
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:31.188 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":247,"skipped":4191,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:24:59.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:25:17.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5661" for this suite. 
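The Job test above relies on restartPolicy OnFailure, so a failing container is restarted in place ("locally restarted") rather than the whole pod being replaced. One way to reproduce the fail-once-then-succeed behavior, using an emptyDir marker file that survives the container restart (all names are illustrative assumptions):

    package main

    import (
        "context"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        completions := int32(2)
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "fail-once"}, // hypothetical name
            Spec: batchv1.JobSpec{
                Completions: &completions,
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        // OnFailure makes the kubelet restart the failed
                        // container inside the same pod instead of the job
                        // controller creating a replacement pod.
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Volumes: []corev1.Volume{{
                            Name:         "data",
                            VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                        }},
                        Containers: []corev1.Container{{
                            Name:  "c",
                            Image: "busybox",
                            // Fail on the first attempt, succeed on the retry:
                            // the emptyDir marker outlives the container restart.
                            Command: []string{"/bin/sh", "-c",
                                "if [ -f /data/ok ]; then exit 0; else touch /data/ok; exit 1; fi"},
                            VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
                        }},
                    },
                },
            },
        }
        if _, err := client.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }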
• [SLOW TEST:18.076 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":248,"skipped":4206,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:25:17.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5537 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 22:25:17.540: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 22:25:45.783: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.193 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5537 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:25:45.783: INFO: >>> kubeConfig: /root/.kube/config I0514 22:25:45.815938 6 log.go:172] (0xc0023ae4d0) (0xc0026cdc20) Create stream I0514 22:25:45.815966 6 log.go:172] (0xc0023ae4d0) (0xc0026cdc20) Stream added, broadcasting: 1 I0514 22:25:45.824320 6 log.go:172] (0xc0023ae4d0) Reply frame received for 1 I0514 22:25:45.824388 6 log.go:172] (0xc0023ae4d0) (0xc001dfa6e0) Create stream I0514 22:25:45.824407 6 log.go:172] (0xc0023ae4d0) (0xc001dfa6e0) Stream added, broadcasting: 3 I0514 22:25:45.827049 6 log.go:172] (0xc0023ae4d0) Reply frame received for 3 I0514 22:25:45.827089 6 log.go:172] (0xc0023ae4d0) (0xc001dfa820) Create stream I0514 22:25:45.827105 6 log.go:172] (0xc0023ae4d0) (0xc001dfa820) Stream added, broadcasting: 5 I0514 22:25:45.828010 6 log.go:172] (0xc0023ae4d0) Reply frame received for 5 I0514 22:25:46.941644 6 log.go:172] (0xc0023ae4d0) Data frame received for 3 I0514 22:25:46.941663 6 log.go:172] (0xc001dfa6e0) (3) Data frame handling I0514 22:25:46.941670 6 log.go:172] (0xc001dfa6e0) (3) Data frame sent I0514 22:25:46.942620 6 log.go:172] (0xc0023ae4d0) Data frame received for 3 I0514 22:25:46.942657 6 log.go:172] (0xc001dfa6e0) (3) Data frame handling I0514 22:25:46.942702 6 log.go:172] (0xc0023ae4d0) Data frame received for 5 I0514 22:25:46.942727 6 log.go:172] (0xc001dfa820) (5) Data frame handling I0514 22:25:46.944140 6 log.go:172] (0xc0023ae4d0) Data frame received for 1 I0514 22:25:46.944173 6 log.go:172] (0xc0026cdc20) (1) Data frame handling I0514 22:25:46.944182 6 log.go:172] (0xc0026cdc20) (1) Data frame sent I0514 22:25:46.944193 6 log.go:172] 
(0xc0023ae4d0) (0xc0026cdc20) Stream removed, broadcasting: 1 I0514 22:25:46.944251 6 log.go:172] (0xc0023ae4d0) (0xc0026cdc20) Stream removed, broadcasting: 1 I0514 22:25:46.944259 6 log.go:172] (0xc0023ae4d0) (0xc001dfa6e0) Stream removed, broadcasting: 3 I0514 22:25:46.944267 6 log.go:172] (0xc0023ae4d0) (0xc001dfa820) Stream removed, broadcasting: 5 May 14 22:25:46.944: INFO: Found all expected endpoints: [netserver-0] I0514 22:25:46.944510 6 log.go:172] (0xc0023ae4d0) Go away received May 14 22:25:46.948: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.34 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5537 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 22:25:46.948: INFO: >>> kubeConfig: /root/.kube/config I0514 22:25:46.971100 6 log.go:172] (0xc002794000) (0xc0013b4140) Create stream I0514 22:25:46.971124 6 log.go:172] (0xc002794000) (0xc0013b4140) Stream added, broadcasting: 1 I0514 22:25:46.972751 6 log.go:172] (0xc002794000) Reply frame received for 1 I0514 22:25:46.972777 6 log.go:172] (0xc002794000) (0xc00159c000) Create stream I0514 22:25:46.972787 6 log.go:172] (0xc002794000) (0xc00159c000) Stream added, broadcasting: 3 I0514 22:25:46.973637 6 log.go:172] (0xc002794000) Reply frame received for 3 I0514 22:25:46.973664 6 log.go:172] (0xc002794000) (0xc0013b4280) Create stream I0514 22:25:46.973673 6 log.go:172] (0xc002794000) (0xc0013b4280) Stream added, broadcasting: 5 I0514 22:25:46.974336 6 log.go:172] (0xc002794000) Reply frame received for 5 I0514 22:25:48.050999 6 log.go:172] (0xc002794000) Data frame received for 5 I0514 22:25:48.051038 6 log.go:172] (0xc0013b4280) (5) Data frame handling I0514 22:25:48.051077 6 log.go:172] (0xc002794000) Data frame received for 3 I0514 22:25:48.051102 6 log.go:172] (0xc00159c000) (3) Data frame handling I0514 22:25:48.051135 6 log.go:172] (0xc00159c000) (3) Data frame sent I0514 22:25:48.051151 6 log.go:172] (0xc002794000) Data frame received for 3 I0514 22:25:48.051163 6 log.go:172] (0xc00159c000) (3) Data frame handling I0514 22:25:48.053709 6 log.go:172] (0xc002794000) Data frame received for 1 I0514 22:25:48.053726 6 log.go:172] (0xc0013b4140) (1) Data frame handling I0514 22:25:48.053736 6 log.go:172] (0xc0013b4140) (1) Data frame sent I0514 22:25:48.053747 6 log.go:172] (0xc002794000) (0xc0013b4140) Stream removed, broadcasting: 1 I0514 22:25:48.053819 6 log.go:172] (0xc002794000) Go away received I0514 22:25:48.053852 6 log.go:172] (0xc002794000) (0xc0013b4140) Stream removed, broadcasting: 1 I0514 22:25:48.053876 6 log.go:172] (0xc002794000) (0xc00159c000) Stream removed, broadcasting: 3 I0514 22:25:48.053890 6 log.go:172] (0xc002794000) (0xc0013b4280) Stream removed, broadcasting: 5 May 14 22:25:48.053: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:25:48.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5537" for this suite. 
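The ExecWithOptions entries above correspond to an exec subresource call streamed over SPDY, which is how the framework runs the `nc` probe inside the host test pod. A hedged sketch of the same probe (pod, container, namespace, and IP are taken from the log; StreamWithContext assumes a recent client-go):

    package main

    import (
        "bytes"
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Same probe the test runs: send a line over UDP to the netserver
        // pod and keep any non-blank reply.
        cmd := `echo hostName | nc -w 1 -u 10.244.1.193 8081 | grep -v '^\s*$'`

        req := client.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("pod-network-test-5537").
            Name("host-test-container-pod").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "agnhost",
                Command:   []string{"/bin/sh", "-c", cmd},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        var stdout, stderr bytes.Buffer
        if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
            Stdout: &stdout, Stderr: &stderr,
        }); err != nil {
            panic(err)
        }
        fmt.Println(stdout.String())
    }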
• [SLOW TEST:30.564 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4218,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:25:48.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 14 22:25:48.154: INFO: Waiting up to 5m0s for pod "pod-5e38299c-367d-4841-8bed-8a27481ac326" in namespace "emptydir-3109" to be "success or failure" May 14 22:25:48.172: INFO: Pod "pod-5e38299c-367d-4841-8bed-8a27481ac326": Phase="Pending", Reason="", readiness=false. Elapsed: 17.218114ms May 14 22:25:50.176: INFO: Pod "pod-5e38299c-367d-4841-8bed-8a27481ac326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021552794s May 14 22:25:52.180: INFO: Pod "pod-5e38299c-367d-4841-8bed-8a27481ac326": Phase="Running", Reason="", readiness=true. Elapsed: 4.025722509s May 14 22:25:54.202: INFO: Pod "pod-5e38299c-367d-4841-8bed-8a27481ac326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04778887s STEP: Saw pod success May 14 22:25:54.202: INFO: Pod "pod-5e38299c-367d-4841-8bed-8a27481ac326" satisfied condition "success or failure" May 14 22:25:54.205: INFO: Trying to get logs from node jerma-worker pod pod-5e38299c-367d-4841-8bed-8a27481ac326 container test-container: STEP: delete the pod May 14 22:25:54.541: INFO: Waiting for pod pod-5e38299c-367d-4841-8bed-8a27481ac326 to disappear May 14 22:25:54.671: INFO: Pod pod-5e38299c-367d-4841-8bed-8a27481ac326 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:25:54.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3109" for this suite. 
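The emptyDir case above creates a pod that runs as a non-root UID and writes a 0644 file on the default medium. A rough equivalent (the UID, names, and image are illustrative assumptions):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        uid := int64(1000) // an arbitrary non-root UID
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"}, // hypothetical name
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    // Leaving Medium unset selects the default medium (node
                    // disk); corev1.StorageMediumMemory would use tmpfs instead.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    Command: []string{"/bin/sh", "-c",
                        "touch /scratch/f && chmod 0644 /scratch/f && ls -ln /scratch/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }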
• [SLOW TEST:6.631 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4233,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:25:54.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-107eac20-79f9-4955-87f2-8fa88f8d6f4c STEP: Creating a pod to test consume secrets May 14 22:25:55.318: INFO: Waiting up to 5m0s for pod "pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904" in namespace "secrets-3263" to be "success or failure" May 14 22:25:55.351: INFO: Pod "pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904": Phase="Pending", Reason="", readiness=false. Elapsed: 33.26031ms May 14 22:25:57.504: INFO: Pod "pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18653242s May 14 22:25:59.544: INFO: Pod "pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.226482272s STEP: Saw pod success May 14 22:25:59.544: INFO: Pod "pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904" satisfied condition "success or failure" May 14 22:25:59.548: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904 container secret-volume-test: STEP: delete the pod May 14 22:25:59.631: INFO: Waiting for pod pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904 to disappear May 14 22:25:59.748: INFO: Pod pod-secrets-dd0e0c5d-2133-481f-96be-a0abdc585904 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:25:59.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3263" for this suite. 
• [SLOW TEST:5.064 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4235,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:25:59.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:26:05.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3863" for this suite. • [SLOW TEST:5.578 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":252,"skipped":4248,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:26:05.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:26:05.425: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:26:12.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-2017" for this suite. • [SLOW TEST:6.841 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":253,"skipped":4256,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:26:12.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0514 22:26:13.456131 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 22:26:13.456: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:26:13.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3223" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":254,"skipped":4256,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:26:13.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-7cd941eb-2631-427e-8fc9-5b834339c045 STEP: Creating secret with name s-test-opt-upd-b7d08e85-08cf-4f6c-a004-d4a5f939cd53 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7cd941eb-2631-427e-8fc9-5b834339c045 STEP: Updating secret s-test-opt-upd-b7d08e85-08cf-4f6c-a004-d4a5f939cd53 STEP: Creating secret with name s-test-opt-create-5aeb8a0c-5d28-4a99-9a84-0b90c874214e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:34.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8433" for this suite. • [SLOW TEST:80.612 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4265,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:34.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-5135b0d6-1863-46f7-b7c4-229d972a1b0d STEP: Creating a pod to test consume configMaps May 14 22:27:34.179: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c" in namespace "projected-2732" to be "success or failure" May 14 22:27:34.182: INFO: Pod "pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.38573ms May 14 22:27:36.294: INFO: Pod "pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115183938s May 14 22:27:38.298: INFO: Pod "pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118701204s STEP: Saw pod success May 14 22:27:38.298: INFO: Pod "pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c" satisfied condition "success or failure" May 14 22:27:38.300: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c container projected-configmap-volume-test: STEP: delete the pod May 14 22:27:38.380: INFO: Waiting for pod pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c to disappear May 14 22:27:38.427: INFO: Pod pod-projected-configmaps-6319922d-c89b-4737-af98-39b40272e43c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:38.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2732" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:38.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 14 22:27:38.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3342' May 14 22:27:38.655: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 22:27:38.655: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 14 22:27:40.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3342' May 14 22:27:41.064: INFO: stderr: "" May 14 22:27:41.064: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:41.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3342" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":257,"skipped":4310,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:41.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:27:41.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7" in namespace "downward-api-7092" to be "success or failure" May 14 22:27:41.765: INFO: Pod "downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.689005ms May 14 22:27:44.011: INFO: Pod "downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298941349s May 14 22:27:46.015: INFO: Pod "downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.302760682s STEP: Saw pod success May 14 22:27:46.015: INFO: Pod "downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7" satisfied condition "success or failure" May 14 22:27:46.018: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7 container client-container: STEP: delete the pod May 14 22:27:46.241: INFO: Waiting for pod downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7 to disappear May 14 22:27:46.492: INFO: Pod downwardapi-volume-5bd38a6c-beba-46ba-8897-e102537667f7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:46.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7092" for this suite. • [SLOW TEST:5.497 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4323,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:46.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 14 22:27:46.704: INFO: Waiting up to 5m0s for pod "downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4" in namespace "downward-api-6512" to be "success or failure" May 14 22:27:46.762: INFO: Pod "downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4": Phase="Pending", Reason="", readiness=false. Elapsed: 57.860733ms May 14 22:27:48.766: INFO: Pod "downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061734493s May 14 22:27:50.770: INFO: Pod "downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06611468s STEP: Saw pod success May 14 22:27:50.770: INFO: Pod "downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4" satisfied condition "success or failure" May 14 22:27:50.773: INFO: Trying to get logs from node jerma-worker pod downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4 container dapi-container: STEP: delete the pod May 14 22:27:50.791: INFO: Waiting for pod downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4 to disappear May 14 22:27:50.810: INFO: Pod downward-api-176cbfeb-8b85-4d9f-9d3a-ff498fa487a4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:50.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6512" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4342,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:50.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:50.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6787" for this suite. 
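
Note: the secure-master check above shows no STEP lines because the assertion happens entirely inside the [It] block; as far as the log shows, it verifies that the built-in "kubernetes" Service in the "default" namespace exists and exposes the API server over HTTPS on port 443. A minimal sketch of the same inspection done by hand (assuming the same kubeconfig; not part of the test run):

$ kubectl --kubeconfig=/root/.kube/config get service kubernetes -n default
# expect a ClusterIP service with PORT(S) 443/TCP
$ kubectl --kubeconfig=/root/.kube/config get endpoints kubernetes -n default
# expect the API server address(es) backing that service
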
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":260,"skipped":4350,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:50.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:27:51.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f" in namespace "projected-1880" to be "success or failure" May 14 22:27:51.033: INFO: Pod "downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.597885ms May 14 22:27:53.061: INFO: Pod "downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030098035s May 14 22:27:55.065: INFO: Pod "downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034568807s STEP: Saw pod success May 14 22:27:55.065: INFO: Pod "downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f" satisfied condition "success or failure" May 14 22:27:55.068: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f container client-container: STEP: delete the pod May 14 22:27:55.090: INFO: Waiting for pod downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f to disappear May 14 22:27:55.094: INFO: Pod downwardapi-volume-b3907c1b-e90a-4f6e-83eb-15512e6a273f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:55.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1880" for this suite. 
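
Note: the pod in the projected-downwardAPI test above reads its own memory limit from a file mounted through a projected volume's downwardAPI source. A minimal sketch of such a pod, not the test's actual manifest (image, names, and paths are illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memory-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]   # prints the limit in bytes
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
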
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4356,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:55.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:27:55.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8556" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":262,"skipped":4359,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:27:55.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:27:55.490: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 14 22:28:00.510: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 22:28:00.510: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 14 22:28:02.514: INFO: Creating deployment "test-rollover-deployment" May 14 22:28:02.611: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 14 22:28:04.619: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 14 22:28:04.635: INFO: Ensure that both replica sets have 1 created replica May 14 22:28:04.640: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 14 22:28:04.647: INFO: Updating deployment test-rollover-deployment May 14 22:28:04.647: INFO: Wait deployment "test-rollover-deployment" to 
be observed by the deployment controller May 14 22:28:06.803: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 14 22:28:06.807: INFO: Make sure deployment "test-rollover-deployment" is complete May 14 22:28:06.812: INFO: all replica sets need to contain the pod-template-hash label May 14 22:28:06.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092084, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:08.818: INFO: all replica sets need to contain the pod-template-hash label May 14 22:28:08.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:10.818: INFO: all replica sets need to contain the pod-template-hash label May 14 22:28:10.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:12.819: INFO: all replica sets need to contain the pod-template-hash label May 14 22:28:12.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:14.819: INFO: all replica sets need to contain the pod-template-hash label May 14 22:28:14.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:16.820: INFO: all replica sets need to contain the pod-template-hash label May 14 22:28:16.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092082, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:18.837: INFO: May 14 22:28:18.837: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 14 22:28:18.844: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9265 /apis/apps/v1/namespaces/deployment-9265/deployments/test-rollover-deployment b77f6098-697a-498b-a54d-643cacce48d2 16226239 2 2020-05-14 22:28:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] 
[] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fcc458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-14 22:28:02 +0000 UTC,LastTransitionTime:2020-05-14 22:28:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-14 22:28:17 +0000 UTC,LastTransitionTime:2020-05-14 22:28:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 14 22:28:18.847: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9265 /apis/apps/v1/namespaces/deployment-9265/replicasets/test-rollover-deployment-574d6dfbff 25489c5e-7de3-4032-bb27-0cc0b7771961 16226229 2 2020-05-14 22:28:04 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b77f6098-697a-498b-a54d-643cacce48d2 0xc001fccdd7 0xc001fccdd8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fccec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 
14 22:28:18.847: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 14 22:28:18.847: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9265 /apis/apps/v1/namespaces/deployment-9265/replicasets/test-rollover-controller c128232e-2924-467b-9ab4-199e8c3b40ba 16226238 2 2020-05-14 22:27:55 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b77f6098-697a-498b-a54d-643cacce48d2 0xc001fccc37 0xc001fccc38}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001fccd08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 22:28:18.847: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9265 /apis/apps/v1/namespaces/deployment-9265/replicasets/test-rollover-deployment-f6c94f66c 848cb07a-185c-4d1d-8d07-fd8f969fe3bd 16226181 2 2020-05-14 22:28:02 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b77f6098-697a-498b-a54d-643cacce48d2 0xc001fccfa0 0xc001fccfa1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001fcd088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 22:28:18.850: INFO: Pod "test-rollover-deployment-574d6dfbff-wsw7l" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-wsw7l test-rollover-deployment-574d6dfbff- deployment-9265 /api/v1/namespaces/deployment-9265/pods/test-rollover-deployment-574d6dfbff-wsw7l 5913b358-833c-4bdd-9666-6582352880ea 16226195 0 2020-05-14 22:28:04 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] 
[{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 25489c5e-7de3-4032-bb27-0cc0b7771961 0xc001fcd897 0xc001fcd898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-brjdb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-brjdb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-brjdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:28:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:28:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 22:28:04 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.41,StartTime:2020-05-14 22:28:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 22:28:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2cb13f51ae5df115726283ba38d6fca797121a566da931790d7020ab0e2c5915,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:28:18.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9265" for this suite. • [SLOW TEST:23.617 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":263,"skipped":4369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:28:18.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-9wb6 STEP: Creating a pod to test atomic-volume-subpath May 14 22:28:19.128: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9wb6" in namespace "subpath-9587" to be "success or failure" May 14 22:28:19.137: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.379726ms May 14 22:28:21.143: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015076158s May 14 22:28:23.147: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.018629129s May 14 22:28:25.151: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.02288586s May 14 22:28:27.156: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.027556972s May 14 22:28:29.160: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.032436032s May 14 22:28:31.165: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.037417407s May 14 22:28:33.169: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.041058473s May 14 22:28:35.211: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.082711032s May 14 22:28:37.215: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.086852177s May 14 22:28:39.219: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.090649417s May 14 22:28:41.223: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Running", Reason="", readiness=true. Elapsed: 22.094496227s May 14 22:28:43.241: INFO: Pod "pod-subpath-test-projected-9wb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.113257512s STEP: Saw pod success May 14 22:28:43.241: INFO: Pod "pod-subpath-test-projected-9wb6" satisfied condition "success or failure" May 14 22:28:43.244: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-9wb6 container test-container-subpath-projected-9wb6: STEP: delete the pod May 14 22:28:43.326: INFO: Waiting for pod pod-subpath-test-projected-9wb6 to disappear May 14 22:28:43.342: INFO: Pod pod-subpath-test-projected-9wb6 no longer exists STEP: Deleting pod pod-subpath-test-projected-9wb6 May 14 22:28:43.342: INFO: Deleting pod "pod-subpath-test-projected-9wb6" in namespace "subpath-9587" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:28:43.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9587" for this suite. 
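
Note: the atomic-writer subpath test above keeps the pod Running for roughly 20 seconds while it repeatedly reads a single file that was mounted out of a projected volume via subPath. A minimal sketch of that mount shape, with hypothetical names (the real test wires this up programmatically):

$ kubectl create configmap subpath-demo-cm --from-literal=projected-file='atomic content'
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/projected-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume/projected-file   # mount a single file...
      subPath: projected-file                  # ...picked out of the volume by subPath
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-cm
EOF
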
• [SLOW TEST:24.504 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":264,"skipped":4396,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:28:43.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 14 22:28:43.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8148' May 14 22:28:43.824: INFO: stderr: "" May 14 22:28:43.824: INFO: stdout: "pod/pause created\n" May 14 22:28:43.824: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 14 22:28:43.824: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8148" to be "running and ready" May 14 22:28:43.828: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649806ms May 14 22:28:45.831: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00743917s May 14 22:28:47.894: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.070061692s May 14 22:28:47.894: INFO: Pod "pause" satisfied condition "running and ready" May 14 22:28:47.894: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 14 22:28:47.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8148' May 14 22:28:47.995: INFO: stderr: "" May 14 22:28:47.995: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 14 22:28:47.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8148' May 14 22:28:48.091: INFO: stderr: "" May 14 22:28:48.091: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 14 22:28:48.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8148' May 14 22:28:48.185: INFO: stderr: "" May 14 22:28:48.185: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 14 22:28:48.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8148' May 14 22:28:48.271: INFO: stderr: "" May 14 22:28:48.271: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 14 22:28:48.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8148' May 14 22:28:48.411: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 22:28:48.411: INFO: stdout: "pod \"pause\" force deleted\n" May 14 22:28:48.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8148' May 14 22:28:48.701: INFO: stderr: "No resources found in kubectl-8148 namespace.\n" May 14 22:28:48.702: INFO: stdout: "" May 14 22:28:48.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8148 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 22:28:48.942: INFO: stderr: "" May 14 22:28:48.942: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:28:48.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8148" for this suite. 
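
Note: stripped of the --kubeconfig/--namespace plumbing, the label workflow exercised above is three commands; the trailing '-' form removes a label, and --overwrite (not used by this test) would change an existing value:

$ kubectl label pods pause testing-label=testing-label-value   # add the label
$ kubectl get pod pause -L testing-label                       # show it as an extra column
$ kubectl label pods pause testing-label-                      # remove it
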
• [SLOW TEST:5.647 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":265,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:28:49.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 14 22:28:50.524: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 14 22:28:52.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 22:28:54.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092130, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 14 22:28:57.573: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 14 22:28:57.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:28:57.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9740" for this suite. STEP: Destroying namespace "webhook-9740-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.758 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":266,"skipped":4429,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:28:57.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:28:57.842: INFO: Creating ReplicaSet my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4 May 14 22:28:57.876: INFO: Pod name my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4: Found 0 pods out of 1 May 14 22:29:02.900: INFO: Pod name my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4: Found 1 pods out of 1 May 14 22:29:02.900: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4" is running May 14 22:29:02.903: INFO: Pod "my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4-zzxbx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:28:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:29:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:29:01 +0000 UTC Reason: Message:} {Type:PodScheduled 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 22:28:57 +0000 UTC Reason: Message:}]) May 14 22:29:02.903: INFO: Trying to dial the pod May 14 22:29:07.917: INFO: Controller my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4: Got expected result from replica 1 [my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4-zzxbx]: "my-hostname-basic-81536546-88fa-4f41-88df-4a64cbf774a4-zzxbx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:07.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2044" for this suite. • [SLOW TEST:10.159 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":267,"skipped":4432,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:29:07.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:29:12.121: INFO: Waiting up to 5m0s for pod "client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746" in namespace "pods-226" to be "success or failure" May 14 22:29:12.124: INFO: Pod "client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015156ms May 14 22:29:14.247: INFO: Pod "client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125769208s May 14 22:29:16.251: INFO: Pod "client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129835776s STEP: Saw pod success May 14 22:29:16.251: INFO: Pod "client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746" satisfied condition "success or failure" May 14 22:29:16.255: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746 container env3cont: STEP: delete the pod May 14 22:29:16.308: INFO: Waiting for pod client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746 to disappear May 14 22:29:16.427: INFO: Pod client-envvars-153177a0-f26f-4b93-b273-0bc7990ad746 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:16.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-226" for this suite. 
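
Note: the services-env-vars test above works because the kubelet injects docker-link-style variables (<NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT, upper-cased with dashes mapped to underscores) for every Service that already exists in the namespace when a pod starts. A rough manual reproduction, with illustrative names:

$ kubectl create service clusterip fooservice --tcp=8765:9376
$ kubectl run env-dump --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
$ kubectl logs env-dump   # expect FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT
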
• [SLOW TEST:8.508 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4445,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:29:16.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 14 22:29:16.609: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-108f1c49-43dd-4de6-be0a-6815681dd980" in namespace "security-context-test-3556" to be "success or failure" May 14 22:29:16.661: INFO: Pod "alpine-nnp-false-108f1c49-43dd-4de6-be0a-6815681dd980": Phase="Pending", Reason="", readiness=false. Elapsed: 51.77454ms May 14 22:29:18.666: INFO: Pod "alpine-nnp-false-108f1c49-43dd-4de6-be0a-6815681dd980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057045193s May 14 22:29:20.670: INFO: Pod "alpine-nnp-false-108f1c49-43dd-4de6-be0a-6815681dd980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060570126s May 14 22:29:20.670: INFO: Pod "alpine-nnp-false-108f1c49-43dd-4de6-be0a-6815681dd980" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:20.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3556" for this suite. 
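
Note: the pod above sets allowPrivilegeEscalation: false, which turns on the kernel's no_new_privs flag so a non-root process cannot gain privileges through setuid binaries. A minimal sketch of such a spec (image and command are illustrative, not the test's):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]   # expect NoNewPrivs: 1
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false
EOF
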
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4452,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:29:20.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a4f34090-8189-47eb-a1a1-d9eaf108bca1 STEP: Creating a pod to test consume secrets May 14 22:29:20.765: INFO: Waiting up to 5m0s for pod "pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8" in namespace "secrets-3255" to be "success or failure" May 14 22:29:20.768: INFO: Pod "pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.124239ms May 14 22:29:22.775: INFO: Pod "pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009618936s May 14 22:29:24.779: INFO: Pod "pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8": Phase="Running", Reason="", readiness=true. Elapsed: 4.014467514s May 14 22:29:26.783: INFO: Pod "pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018302645s STEP: Saw pod success May 14 22:29:26.783: INFO: Pod "pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8" satisfied condition "success or failure" May 14 22:29:26.786: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8 container secret-env-test: STEP: delete the pod May 14 22:29:26.813: INFO: Waiting for pod pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8 to disappear May 14 22:29:26.829: INFO: Pod pod-secrets-bcf9e1e9-16c7-4e4f-af5f-1a766c453eb8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:26.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3255" for this suite. 
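
Note: consuming a Secret as an environment variable, as the secret-env-test container above does, needs only a secretKeyRef under env[].valueFrom. A minimal sketch with illustrative names:

$ kubectl create secret generic secret-test --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]   # prints value-1
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
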
• [SLOW TEST:6.152 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4455,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:29:26.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-d2ff3b97-224f-450b-a5d6-5ba2ae914672 STEP: Creating a pod to test consume configMaps May 14 22:29:26.914: INFO: Waiting up to 5m0s for pod "pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4" in namespace "configmap-8900" to be "success or failure" May 14 22:29:26.918: INFO: Pod "pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005831ms May 14 22:29:28.922: INFO: Pod "pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008088348s May 14 22:29:30.926: INFO: Pod "pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012061808s May 14 22:29:32.930: INFO: Pod "pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015580751s STEP: Saw pod success May 14 22:29:32.930: INFO: Pod "pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4" satisfied condition "success or failure" May 14 22:29:32.932: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4 container configmap-volume-test: STEP: delete the pod May 14 22:29:32.963: INFO: Waiting for pod pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4 to disappear May 14 22:29:32.979: INFO: Pod pod-configmaps-84db69cc-b939-413c-a477-ff2baacf42e4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:32.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8900" for this suite. 
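
Note: in the configMap test above, "with mappings" means the volume remaps a key to a custom relative path via items[], and "as non-root" means the pod runs with a non-zero UID. A minimal sketch, names illustrative:

$ kubectl create configmap configmap-test --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # run the whole pod as a non-root user
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/cm
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test
      items:
      - key: data-1
        path: path/to/data-1   # key remapped to a nested path
EOF
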
• [SLOW TEST:6.151 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:29:32.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:46.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-164" for this suite. • [SLOW TEST:13.226 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":272,"skipped":4490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 14 22:29:46.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 14 22:29:47.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c" in namespace "projected-9705" to be "success or failure" May 14 22:29:47.112: INFO: Pod "downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.293067ms May 14 22:29:49.116: INFO: Pod "downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037156944s May 14 22:29:51.182: INFO: Pod "downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103914684s STEP: Saw pod success May 14 22:29:51.182: INFO: Pod "downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c" satisfied condition "success or failure" May 14 22:29:51.186: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c container client-container: STEP: delete the pod May 14 22:29:51.208: INFO: Waiting for pod downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c to disappear May 14 22:29:51.256: INFO: Pod downwardapi-volume-5013bffc-3849-46ec-81b5-cd19711dae3c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 14 22:29:51.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9705" for this suite. 
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:29:51.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 14 22:29:51.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693" in namespace "downward-api-3957" to be "success or failure"
May 14 22:29:51.865: INFO: Pod "downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693": Phase="Pending", Reason="", readiness=false. Elapsed: 82.726763ms
May 14 22:29:53.869: INFO: Pod "downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086478093s
May 14 22:29:56.441: INFO: Pod "downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.658715844s
STEP: Saw pod success
May 14 22:29:56.441: INFO: Pod "downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693" satisfied condition "success or failure"
May 14 22:29:56.444: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693 container client-container:
STEP: delete the pod
May 14 22:29:56.747: INFO: Waiting for pod downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693 to disappear
May 14 22:29:56.782: INFO: Pod downwardapi-volume-96e554d4-38cb-4be0-8e74-9273b8887693 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:29:56.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3957" for this suite.
• [SLOW TEST:5.538 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4516,"failed":0}
SSSSSSSSSSS
------------------------------
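Unlike the projected variant above, this spec uses a plain downwardAPI volume and a fieldRef rather than a resourceFieldRef: metadata.name is written to a "podname" file inside the mount. A sketch of just that volume (names illustrative):

package sketches

import corev1 "k8s.io/api/core/v1"

// podnameVolume is the non-projected downwardAPI volume shape: the pod's
// own name becomes the contents of <mountPath>/podname.
func podnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
				}},
			},
		},
	}
}

------------------------------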
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:29:56.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 14 22:29:57.104: INFO: Waiting up to 5m0s for pod "pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e" in namespace "emptydir-6146" to be "success or failure"
May 14 22:29:57.107: INFO: Pod "pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.999119ms
May 14 22:29:59.112: INFO: Pod "pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00721516s
May 14 22:30:01.115: INFO: Pod "pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010483323s
STEP: Saw pod success
May 14 22:30:01.115: INFO: Pod "pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e" satisfied condition "success or failure"
May 14 22:30:01.118: INFO: Trying to get logs from node jerma-worker pod pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e container test-container:
STEP: delete the pod
May 14 22:30:01.137: INFO: Waiting for pod pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e to disappear
May 14 22:30:01.142: INFO: Pod pod-d5a533b5-44d1-46dd-b7ac-e5e0f8175d8e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:30:01.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6146" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
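The (non-root,0644,tmpfs) case combines a memory-backed emptyDir (tmpfs) with a file created at mode 0644 by a non-root UID. A sketch under those assumptions; the conformance test drives its own mounttest image, so busybox and the shell command below are stand-ins:

package sketches

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDirPodSpec sketches the tested combination: tmpfs emptyDir,
// 0644 file mode, non-root user.
func tmpfsEmptyDirPodSpec() corev1.PodSpec {
	uid := int64(1000) // any non-root UID
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium: Memory backs the volume with tmpfs instead of node disk.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox",
			Command: []string{"sh", "-c",
				"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
			VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
		}},
	}
}

------------------------------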
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:30:01.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:30:12.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1592" for this suite.
• [SLOW TEST:11.155 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":276,"skipped":4554,"failed":0}
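The replica-set variant is the same lifecycle as the pod quota spec earlier, but tracked through an object-count quota rather than compute resources. A sketch (the count/replicasets.apps key and all names are illustrative assumptions, not lifted from the test):

package sketches

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// quotaReplicaSetLifecycle creates an object-count quota, a ReplicaSet that
// consumes it, and then deletes the ReplicaSet to release the usage.
func quotaReplicaSetLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{Hard: corev1.ResourceList{
			"count/replicasets.apps": resource.MustParse("1"),
		}},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		return err
	}
	replicas := int32(0) // zero replicas still counts as one ReplicaSet object
	labels := map[string]string{"app": "quota-rs"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rs"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "pause", Image: "k8s.gcr.io/pause:3.1",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets(ns).Create(ctx, rs, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Deleting the ReplicaSet releases the count/replicasets.apps usage,
	// which real code would confirm by polling quota Status.Used.
	return cs.AppsV1().ReplicaSets(ns).Delete(ctx, rs.Name, metav1.DeleteOptions{})
}

------------------------------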
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:30:12.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 22:30:12.854: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 14 22:30:14.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092212, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092212, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092212, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725092212, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 22:30:17.934: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:30:28.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4655" for this suite.
STEP: Destroying namespace "webhook-4655-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.865 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":277,"skipped":4554,"failed":0}
SSSS
------------------------------
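"Registering the webhook via the AdmissionRegistration API" amounts to creating a ValidatingWebhookConfiguration that targets pod and configmap CREATE operations and points at the service deployed in the setup steps. A sketch of such a registration; the service name, handler path, and CA bundle are placeholders for whatever the webhook deployment actually serves:

package sketches

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerDenyingWebhook registers a validating webhook that the API server
// consults on pod and configmap creation. Illustrative shape only.
func registerDenyingWebhook(ctx context.Context, cs kubernetes.Interface, caBundle []byte) error {
	path := "/always-deny" // hypothetical handler path on the webhook service
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pod-and-configmap-creation"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-unwanted-creates.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-ns",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // PEM bundle that signs the webhook's serving cert
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}

A namespaceSelector on the webhook is what makes the "whitelisted namespace" step possible: namespaces whose labels fall outside the selector bypass the webhook entirely.

------------------------------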
[sig-cli] Kubectl client Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 14 22:30:28.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
May 14 22:30:28.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 14 22:30:28.503: INFO: stderr: ""
May 14 22:30:28.503: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 14 22:30:28.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9363" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":278,"skipped":4558,"failed":0}
SSSSSS
May 14 22:30:28.511: INFO: Running AfterSuite actions on all nodes
May 14 22:30:28.511: INFO: Running AfterSuite actions on node 1
May 14 22:30:28.511: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 4835.753 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
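------------------------------
Closing note: the final spec's check (that "v1" appears in the output of kubectl api-versions) can also be reproduced programmatically with client-go's discovery client instead of shelling out to kubectl. A sketch; the legacy core group advertises the bare "v1" group-version:

package sketches

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// hasCoreV1 reports whether the API server advertises the core "v1"
// group-version, the programmatic equivalent of grepping `kubectl api-versions`.
func hasCoreV1(config *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		return false, err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}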