I0319 21:07:39.635497 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0319 21:07:39.635769 6 e2e.go:109] Starting e2e run "4bf8de45-bc7e-47e5-b1ad-d03fb933c17a" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584652058 - Will randomize all specs
Will run 278 of 4843 specs

Mar 19 21:07:39.698: INFO: >>> kubeConfig: /root/.kube/config
Mar 19 21:07:39.700: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 19 21:07:39.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 19 21:07:39.756: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 19 21:07:39.756: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 19 21:07:39.756: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 19 21:07:39.770: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 19 21:07:39.770: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 19 21:07:39.770: INFO: e2e test version: v1.17.3
Mar 19 21:07:39.771: INFO: kube-apiserver version: v1.17.2
Mar 19 21:07:39.771: INFO: >>> kubeConfig: /root/.kube/config
Mar 19 21:07:39.777: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:07:39.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Mar 19 21:07:39.862: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:07:40.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:07:42.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720248860, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720248860, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720248860, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720248860, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:07:45.301: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:07:45.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-804" for this suite. STEP: Destroying namespace "webhook-804-markers" for this suite. 
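The spec above registers a mutating pod webhook through the AdmissionRegistration API and then creates a pod that should come back mutated. As a rough Go sketch of that kind of registration, using the k8s.io/api/admissionregistration/v1 types; the webhook name, service reference, path, and CA bundle are illustrative placeholders, not the values the e2e framework generates internally:

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

func main() {
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "sample-mutating-webhook"}, // placeholder name
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "pod-defaulter.example.com", // placeholder
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Points the apiserver at the in-cluster webhook service.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-ns",                // placeholder
					Name:      "e2e-test-webhook",          // service name seen in the log
					Path:      strPtr("/mutating-pods"),    // placeholder path
				},
				CABundle: []byte("<ca-bundle>"), // placeholder; must be the serving CA
			},
			// Intercept pod creation only.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}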
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.790 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":1,"skipped":43,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:07:45.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 19 21:07:45.624: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:07:51.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2538" for this suite. 
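The InitContainer spec above relies on the rule that, with restartPolicy: Never, a failing init container fails the whole pod and the app containers never start. A minimal Go sketch of a pod built that way; the image and commands are placeholders, not the exact ones the test uses:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"}, // placeholder
		Spec: corev1.PodSpec{
			// With RestartPolicy Never, a failing init container fails
			// the pod; the app container below is never started.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "exit 1"},
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo never reached; sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}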
• [SLOW TEST:5.646 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":2,"skipped":50,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:07:51.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-bnmj STEP: Creating a pod to test atomic-volume-subpath Mar 19 21:07:51.332: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bnmj" in namespace "subpath-133" to be "success or failure" Mar 19 21:07:51.337: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Pending", Reason="", readiness=false. Elapsed: 5.152557ms Mar 19 21:07:53.366: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034543287s Mar 19 21:07:55.369: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 4.037499534s Mar 19 21:07:57.375: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 6.043505996s Mar 19 21:07:59.379: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 8.047779182s Mar 19 21:08:01.384: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 10.051988219s Mar 19 21:08:03.388: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 12.055969353s Mar 19 21:08:05.392: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 14.060289998s Mar 19 21:08:07.396: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 16.064246234s Mar 19 21:08:09.400: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 18.06801694s Mar 19 21:08:11.404: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 20.072326295s Mar 19 21:08:13.408: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Running", Reason="", readiness=true. Elapsed: 22.076359573s Mar 19 21:08:15.412: INFO: Pod "pod-subpath-test-downwardapi-bnmj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.080570927s STEP: Saw pod success Mar 19 21:08:15.412: INFO: Pod "pod-subpath-test-downwardapi-bnmj" satisfied condition "success or failure" Mar 19 21:08:15.415: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-bnmj container test-container-subpath-downwardapi-bnmj: STEP: delete the pod Mar 19 21:08:15.449: INFO: Waiting for pod pod-subpath-test-downwardapi-bnmj to disappear Mar 19 21:08:15.453: INFO: Pod pod-subpath-test-downwardapi-bnmj no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-bnmj Mar 19 21:08:15.453: INFO: Deleting pod "pod-subpath-test-downwardapi-bnmj" in namespace "subpath-133" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:08:15.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-133" for this suite. • [SLOW TEST:24.247 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":3,"skipped":66,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:08:15.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 19 21:08:15.529: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:08:23.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5553" for this suite. 
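Several volume tests in this run share the same wait loop: create a pod, then poll its phase until it is "success or failure" within 5m0s, logging the elapsed time at each step. A sketch of that pattern with client-go's wait helper, assuming a recent client-go (context-taking signatures); the kubeconfig path, namespace, and pod name are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	// Poll every 2s for up to 5m, mirroring the "Waiting up to 5m0s for
	// pod ... to be "success or failure"" entries in the log above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "pod-subpath-test", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", pod.Name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod failed")
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("Saw pod success")
}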
• [SLOW TEST:7.805 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":4,"skipped":80,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:08:23.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-07282de3-7e8d-4651-859c-399b341b2f37 STEP: Creating a pod to test consume configMaps Mar 19 21:08:23.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9" in namespace "configmap-3798" to be "success or failure" Mar 19 21:08:23.360: INFO: Pod "pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232397ms Mar 19 21:08:25.376: INFO: Pod "pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020189739s Mar 19 21:08:27.380: INFO: Pod "pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024287449s STEP: Saw pod success Mar 19 21:08:27.380: INFO: Pod "pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9" satisfied condition "success or failure" Mar 19 21:08:27.383: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9 container configmap-volume-test: STEP: delete the pod Mar 19 21:08:27.431: INFO: Waiting for pod pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9 to disappear Mar 19 21:08:27.444: INFO: Pod pod-configmaps-2ec74b7f-e0b3-4be7-9505-6d4fec96bee9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:08:27.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3798" for this suite. 
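The ConfigMap volume test above projects a configmap into a pod with a non-default file mode: DefaultMode on the volume source sets the permission bits applied to every projected key. A minimal sketch; the configmap name, mount path, and the 0400 mode are illustrative, not taken from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // file mode applied to every key projected into the volume
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // placeholder
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "checker",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/cfg"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}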
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":87,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:08:27.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 19 21:08:27.513: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 19 21:08:37.842: INFO: >>> kubeConfig: /root/.kube/config Mar 19 21:08:39.813: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:08:51.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6775" for this suite. 
• [SLOW TEST:23.822 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":6,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:08:51.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47 Mar 19 21:08:51.331: INFO: Pod name my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47: Found 0 pods out of 1 Mar 19 21:08:56.334: INFO: Pod name my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47: Found 1 pods out of 1 Mar 19 21:08:56.334: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47" are running Mar 19 21:08:56.337: INFO: Pod "my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47-7kbc5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:08:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:08:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:08:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:08:51 +0000 UTC Reason: Message:}]) Mar 19 21:08:56.337: INFO: Trying to dial the pod Mar 19 21:09:01.349: INFO: Controller my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47: Got expected result from replica 1 [my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47-7kbc5]: "my-hostname-basic-48766cd5-480e-4d74-9062-7b4ea6580a47-7kbc5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:09:01.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3031" for this suite. 
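The ReplicationController test creates a single replica that serves its own hostname and then dials it, expecting the pod name back. A sketch of an equivalent controller; the agnhost serve-hostname image and port are an assumption about what the suite runs, not something the log states:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"} // placeholder label
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RC selectors are plain label maps
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "serve-hostname",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed image
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}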
• [SLOW TEST:10.084 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":7,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:09:01.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 19 21:09:01.925: INFO: created pod pod-service-account-defaultsa Mar 19 21:09:01.925: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 19 21:09:01.934: INFO: created pod pod-service-account-mountsa Mar 19 21:09:01.934: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 19 21:09:02.013: INFO: created pod pod-service-account-nomountsa Mar 19 21:09:02.013: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 19 21:09:02.030: INFO: created pod pod-service-account-defaultsa-mountspec Mar 19 21:09:02.030: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 19 21:09:02.079: INFO: created pod pod-service-account-mountsa-mountspec Mar 19 21:09:02.079: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 19 21:09:02.150: INFO: created pod pod-service-account-nomountsa-mountspec Mar 19 21:09:02.150: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 19 21:09:02.169: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 19 21:09:02.169: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 19 21:09:02.185: INFO: created pod pod-service-account-mountsa-nomountspec Mar 19 21:09:02.186: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 19 21:09:02.211: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 19 21:09:02.211: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:09:02.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7790" for this suite. 
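The ServiceAccounts test walks every combination of opting out of API token automount: the flag exists on both the ServiceAccount and the pod spec, and the pod-level setting wins when both are set (which is why pod-service-account-nomountsa-mountspec above still mounts a token). A short sketch of the two opt-out points; the object names are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optOut := false
	// Opting out on the ServiceAccount: pods using it get no token
	// volume unless their own spec overrides the setting.
	sa := corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: &optOut,
	}
	// Opting out on the pod: the spec-level setting takes precedence
	// over the ServiceAccount-level one.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &optOut,
			Containers: []corev1.Container{{
				Name: "app", Image: "busybox", Command: []string{"sleep", "3600"},
			}},
		},
	}
	for _, obj := range []interface{}{sa, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}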
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":8,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:09:02.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:09:14.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5384" for this suite. • [SLOW TEST:12.214 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":164,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:09:14.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:09:14.586: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.696876ms)
Mar 19 21:09:14.589: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.194023ms)
Mar 19 21:09:14.593: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.486242ms)
Mar 19 21:09:14.617: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 24.515222ms)
Mar 19 21:09:14.621: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.449068ms)
Mar 19 21:09:14.624: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.279142ms)
Mar 19 21:09:14.627: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.145943ms)
Mar 19 21:09:14.631: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.214982ms)
Mar 19 21:09:14.634: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.826882ms)
Mar 19 21:09:14.637: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.198713ms)
Mar 19 21:09:14.640: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.029553ms)
Mar 19 21:09:14.643: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.204171ms)
Mar 19 21:09:14.647: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.666771ms)
Mar 19 21:09:14.650: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.509602ms)
Mar 19 21:09:14.654: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.672541ms)
Mar 19 21:09:14.658: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.548767ms)
Mar 19 21:09:14.661: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.597403ms)
Mar 19 21:09:14.665: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.937574ms)
Mar 19 21:09:14.669: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.745808ms)
Mar 19 21:09:14.673: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 3.462039ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:09:14.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2530" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":10,"skipped":174,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:09:14.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:09:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8617" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":180,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:09:18.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 19 21:09:18.844: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108795 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 19 21:09:18.844: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108795 0 2020-03-19 
21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 19 21:09:28.852: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108848 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 19 21:09:28.852: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108848 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 19 21:09:38.859: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108880 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 19 21:09:38.860: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108880 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 19 21:09:48.866: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108910 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 19 21:09:48.866: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-a b28643dd-542b-40d3-8e5e-2badf5d26813 1108910 0 2020-03-19 21:09:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 19 21:09:58.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-b 79edd488-db8b-4acd-b335-d61e07078a35 1108939 0 2020-03-19 21:09:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 19 21:09:58.874: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-b 79edd488-db8b-4acd-b335-d61e07078a35 1108939 0 2020-03-19 21:09:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring 
the correct watchers observe the notification Mar 19 21:10:08.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-b 79edd488-db8b-4acd-b335-d61e07078a35 1108973 0 2020-03-19 21:09:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 19 21:10:08.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8236 /api/v1/namespaces/watch-8236/configmaps/e2e-watch-test-configmap-b 79edd488-db8b-4acd-b335-d61e07078a35 1108973 0 2020-03-19 21:09:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:10:18.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8236" for this suite. • [SLOW TEST:60.105 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":12,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:10:18.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:10:30.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4459" for this suite. • [SLOW TEST:11.171 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":13,"skipped":214,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:10:30.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 19 21:10:30.143: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3194 /api/v1/namespaces/watch-3194/configmaps/e2e-watch-test-resource-version 9adad8ff-47f6-4da7-9f1b-deb253a1f799 1109061 0 2020-03-19 21:10:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 19 21:10:30.143: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3194 /api/v1/namespaces/watch-3194/configmaps/e2e-watch-test-resource-version 9adad8ff-47f6-4da7-9f1b-deb253a1f799 1109062 0 2020-03-19 21:10:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:10:30.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3194" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":14,"skipped":222,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:10:30.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:11:30.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8836" for this suite. • [SLOW TEST:60.077 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":224,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:11:30.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:11:30.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e" in namespace "downward-api-6293" to be "success or failure" Mar 19 21:11:30.322: INFO: Pod "downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736974ms Mar 19 21:11:32.326: INFO: Pod "downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006486564s Mar 19 21:11:34.330: INFO: Pod "downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010725945s STEP: Saw pod success Mar 19 21:11:34.330: INFO: Pod "downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e" satisfied condition "success or failure" Mar 19 21:11:34.333: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e container client-container: STEP: delete the pod Mar 19 21:11:34.366: INFO: Waiting for pod downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e to disappear Mar 19 21:11:34.377: INFO: Pod downwardapi-volume-10130779-1ca6-4db9-a92d-b55487d8122e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:11:34.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6293" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:11:34.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1308 STEP: creating replication controller nodeport-test in namespace services-1308 I0319 21:11:34.625779 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1308, replica count: 2 I0319 21:11:37.676330 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 21:11:40.676661 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 19 21:11:40.676: INFO: Creating new exec pod Mar 19 21:11:45.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1308 execpodrnq5h -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 19 21:11:48.088: INFO: stderr: "I0319 21:11:48.017099 33 log.go:172] (0xc0001016b0) (0xc00061bc20) Create stream\nI0319 21:11:48.017229 33 log.go:172] (0xc0001016b0) (0xc00061bc20) Stream added, broadcasting: 1\nI0319 21:11:48.020197 33 log.go:172] (0xc0001016b0) Reply frame received for 1\nI0319 21:11:48.020273 33 log.go:172] (0xc0001016b0) (0xc000760000) Create stream\nI0319 21:11:48.020291 33 log.go:172] (0xc0001016b0) (0xc000760000) Stream added, broadcasting: 3\nI0319 21:11:48.021454 33 log.go:172] (0xc0001016b0) Reply frame received for 3\nI0319 
21:11:48.021493 33 log.go:172] (0xc0001016b0) (0xc000766000) Create stream\nI0319 21:11:48.021503 33 log.go:172] (0xc0001016b0) (0xc000766000) Stream added, broadcasting: 5\nI0319 21:11:48.022366 33 log.go:172] (0xc0001016b0) Reply frame received for 5\nI0319 21:11:48.081269 33 log.go:172] (0xc0001016b0) Data frame received for 5\nI0319 21:11:48.081313 33 log.go:172] (0xc000766000) (5) Data frame handling\nI0319 21:11:48.081335 33 log.go:172] (0xc000766000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0319 21:11:48.081614 33 log.go:172] (0xc0001016b0) Data frame received for 5\nI0319 21:11:48.081636 33 log.go:172] (0xc000766000) (5) Data frame handling\nI0319 21:11:48.081660 33 log.go:172] (0xc000766000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0319 21:11:48.082059 33 log.go:172] (0xc0001016b0) Data frame received for 5\nI0319 21:11:48.082093 33 log.go:172] (0xc000766000) (5) Data frame handling\nI0319 21:11:48.082228 33 log.go:172] (0xc0001016b0) Data frame received for 3\nI0319 21:11:48.082255 33 log.go:172] (0xc000760000) (3) Data frame handling\nI0319 21:11:48.083893 33 log.go:172] (0xc0001016b0) Data frame received for 1\nI0319 21:11:48.083937 33 log.go:172] (0xc00061bc20) (1) Data frame handling\nI0319 21:11:48.083965 33 log.go:172] (0xc00061bc20) (1) Data frame sent\nI0319 21:11:48.083987 33 log.go:172] (0xc0001016b0) (0xc00061bc20) Stream removed, broadcasting: 1\nI0319 21:11:48.084009 33 log.go:172] (0xc0001016b0) Go away received\nI0319 21:11:48.084560 33 log.go:172] (0xc0001016b0) (0xc00061bc20) Stream removed, broadcasting: 1\nI0319 21:11:48.084585 33 log.go:172] (0xc0001016b0) (0xc000760000) Stream removed, broadcasting: 3\nI0319 21:11:48.084597 33 log.go:172] (0xc0001016b0) (0xc000766000) Stream removed, broadcasting: 5\n" Mar 19 21:11:48.088: INFO: stdout: "" Mar 19 21:11:48.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1308 execpodrnq5h -- /bin/sh -x -c nc -zv -t -w 2 10.106.6.190 80' Mar 19 21:11:48.304: INFO: stderr: "I0319 21:11:48.216830 66 log.go:172] (0xc0007fca50) (0xc0007f81e0) Create stream\nI0319 21:11:48.216879 66 log.go:172] (0xc0007fca50) (0xc0007f81e0) Stream added, broadcasting: 1\nI0319 21:11:48.219764 66 log.go:172] (0xc0007fca50) Reply frame received for 1\nI0319 21:11:48.219789 66 log.go:172] (0xc0007fca50) (0xc000784aa0) Create stream\nI0319 21:11:48.219799 66 log.go:172] (0xc0007fca50) (0xc000784aa0) Stream added, broadcasting: 3\nI0319 21:11:48.220872 66 log.go:172] (0xc0007fca50) Reply frame received for 3\nI0319 21:11:48.220919 66 log.go:172] (0xc0007fca50) (0xc0007f8280) Create stream\nI0319 21:11:48.220939 66 log.go:172] (0xc0007fca50) (0xc0007f8280) Stream added, broadcasting: 5\nI0319 21:11:48.222114 66 log.go:172] (0xc0007fca50) Reply frame received for 5\nI0319 21:11:48.297843 66 log.go:172] (0xc0007fca50) Data frame received for 3\nI0319 21:11:48.297876 66 log.go:172] (0xc000784aa0) (3) Data frame handling\nI0319 21:11:48.297929 66 log.go:172] (0xc0007fca50) Data frame received for 5\nI0319 21:11:48.297944 66 log.go:172] (0xc0007f8280) (5) Data frame handling\nI0319 21:11:48.297967 66 log.go:172] (0xc0007f8280) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.6.190 80\nConnection to 10.106.6.190 80 port [tcp/http] succeeded!\nI0319 21:11:48.298228 66 log.go:172] (0xc0007fca50) Data frame received for 5\nI0319 21:11:48.298265 66 log.go:172] (0xc0007f8280) (5) Data frame handling\nI0319 21:11:48.299786 66 log.go:172] (0xc0007fca50) Data frame 
received for 1\nI0319 21:11:48.299826 66 log.go:172] (0xc0007f81e0) (1) Data frame handling\nI0319 21:11:48.299874 66 log.go:172] (0xc0007f81e0) (1) Data frame sent\nI0319 21:11:48.299896 66 log.go:172] (0xc0007fca50) (0xc0007f81e0) Stream removed, broadcasting: 1\nI0319 21:11:48.299913 66 log.go:172] (0xc0007fca50) Go away received\nI0319 21:11:48.300422 66 log.go:172] (0xc0007fca50) (0xc0007f81e0) Stream removed, broadcasting: 1\nI0319 21:11:48.300441 66 log.go:172] (0xc0007fca50) (0xc000784aa0) Stream removed, broadcasting: 3\nI0319 21:11:48.300450 66 log.go:172] (0xc0007fca50) (0xc0007f8280) Stream removed, broadcasting: 5\n" Mar 19 21:11:48.304: INFO: stdout: "" Mar 19 21:11:48.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1308 execpodrnq5h -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32603' Mar 19 21:11:48.504: INFO: stderr: "I0319 21:11:48.439035 89 log.go:172] (0xc0003c4790) (0xc0007941e0) Create stream\nI0319 21:11:48.439101 89 log.go:172] (0xc0003c4790) (0xc0007941e0) Stream added, broadcasting: 1\nI0319 21:11:48.446092 89 log.go:172] (0xc0003c4790) Reply frame received for 1\nI0319 21:11:48.446131 89 log.go:172] (0xc0003c4790) (0xc0002039a0) Create stream\nI0319 21:11:48.446145 89 log.go:172] (0xc0003c4790) (0xc0002039a0) Stream added, broadcasting: 3\nI0319 21:11:48.447289 89 log.go:172] (0xc0003c4790) Reply frame received for 3\nI0319 21:11:48.447320 89 log.go:172] (0xc0003c4790) (0xc000794320) Create stream\nI0319 21:11:48.447330 89 log.go:172] (0xc0003c4790) (0xc000794320) Stream added, broadcasting: 5\nI0319 21:11:48.448211 89 log.go:172] (0xc0003c4790) Reply frame received for 5\nI0319 21:11:48.497242 89 log.go:172] (0xc0003c4790) Data frame received for 5\nI0319 21:11:48.497262 89 log.go:172] (0xc000794320) (5) Data frame handling\nI0319 21:11:48.497282 89 log.go:172] (0xc000794320) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32603\nConnection to 172.17.0.10 32603 port [tcp/32603] succeeded!\nI0319 21:11:48.497394 89 log.go:172] (0xc0003c4790) Data frame received for 5\nI0319 21:11:48.497403 89 log.go:172] (0xc000794320) (5) Data frame handling\nI0319 21:11:48.497814 89 log.go:172] (0xc0003c4790) Data frame received for 3\nI0319 21:11:48.497852 89 log.go:172] (0xc0002039a0) (3) Data frame handling\nI0319 21:11:48.499718 89 log.go:172] (0xc0003c4790) Data frame received for 1\nI0319 21:11:48.499744 89 log.go:172] (0xc0007941e0) (1) Data frame handling\nI0319 21:11:48.499760 89 log.go:172] (0xc0007941e0) (1) Data frame sent\nI0319 21:11:48.499787 89 log.go:172] (0xc0003c4790) (0xc0007941e0) Stream removed, broadcasting: 1\nI0319 21:11:48.499811 89 log.go:172] (0xc0003c4790) Go away received\nI0319 21:11:48.500235 89 log.go:172] (0xc0003c4790) (0xc0007941e0) Stream removed, broadcasting: 1\nI0319 21:11:48.500270 89 log.go:172] (0xc0003c4790) (0xc0002039a0) Stream removed, broadcasting: 3\nI0319 21:11:48.500286 89 log.go:172] (0xc0003c4790) (0xc000794320) Stream removed, broadcasting: 5\n" Mar 19 21:11:48.504: INFO: stdout: "" Mar 19 21:11:48.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1308 execpodrnq5h -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32603' Mar 19 21:11:48.729: INFO: stderr: "I0319 21:11:48.643682 112 log.go:172] (0xc0003d0d10) (0xc0006d9cc0) Create stream\nI0319 21:11:48.643733 112 log.go:172] (0xc0003d0d10) (0xc0006d9cc0) Stream added, broadcasting: 1\nI0319 21:11:48.646328 112 log.go:172] (0xc0003d0d10) Reply frame received for 1\nI0319 
21:11:48.646369 112 log.go:172] (0xc0003d0d10) (0xc000289400) Create stream\nI0319 21:11:48.646382 112 log.go:172] (0xc0003d0d10) (0xc000289400) Stream added, broadcasting: 3\nI0319 21:11:48.647720 112 log.go:172] (0xc0003d0d10) Reply frame received for 3\nI0319 21:11:48.647795 112 log.go:172] (0xc0003d0d10) (0xc0009a4000) Create stream\nI0319 21:11:48.647826 112 log.go:172] (0xc0003d0d10) (0xc0009a4000) Stream added, broadcasting: 5\nI0319 21:11:48.648981 112 log.go:172] (0xc0003d0d10) Reply frame received for 5\nI0319 21:11:48.723413 112 log.go:172] (0xc0003d0d10) Data frame received for 3\nI0319 21:11:48.723481 112 log.go:172] (0xc000289400) (3) Data frame handling\nI0319 21:11:48.723516 112 log.go:172] (0xc0003d0d10) Data frame received for 5\nI0319 21:11:48.723550 112 log.go:172] (0xc0009a4000) (5) Data frame handling\nI0319 21:11:48.723580 112 log.go:172] (0xc0009a4000) (5) Data frame sent\nI0319 21:11:48.723600 112 log.go:172] (0xc0003d0d10) Data frame received for 5\nI0319 21:11:48.723618 112 log.go:172] (0xc0009a4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32603\nConnection to 172.17.0.8 32603 port [tcp/32603] succeeded!\nI0319 21:11:48.724738 112 log.go:172] (0xc0003d0d10) Data frame received for 1\nI0319 21:11:48.724754 112 log.go:172] (0xc0006d9cc0) (1) Data frame handling\nI0319 21:11:48.724774 112 log.go:172] (0xc0006d9cc0) (1) Data frame sent\nI0319 21:11:48.724787 112 log.go:172] (0xc0003d0d10) (0xc0006d9cc0) Stream removed, broadcasting: 1\nI0319 21:11:48.725006 112 log.go:172] (0xc0003d0d10) Go away received\nI0319 21:11:48.725038 112 log.go:172] (0xc0003d0d10) (0xc0006d9cc0) Stream removed, broadcasting: 1\nI0319 21:11:48.725052 112 log.go:172] (0xc0003d0d10) (0xc000289400) Stream removed, broadcasting: 3\nI0319 21:11:48.725059 112 log.go:172] (0xc0003d0d10) (0xc0009a4000) Stream removed, broadcasting: 5\n" Mar 19 21:11:48.729: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:11:48.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1308" for this suite. 
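The Services test above creates a type: NodePort service backed by two replicas, then checks reachability three ways: via the service name, via the ClusterIP (10.106.6.190:80), and via each node's allocated node port (32603). A minimal sketch of such a service; leaving NodePort unset lets the apiserver allocate one from its node-port range, and the selector label is a placeholder:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"}, // placeholder label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				// NodePort left unset: the apiserver picks one from the
				// node-port range (32603 in the run above).
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}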
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.293 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":17,"skipped":248,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:11:48.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 19 21:11:48.823: INFO: Waiting up to 5m0s for pod "client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74" in namespace "containers-4472" to be "success or failure" Mar 19 21:11:48.827: INFO: Pod "client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167939ms Mar 19 21:11:50.830: INFO: Pod "client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007672142s Mar 19 21:11:52.834: INFO: Pod "client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011537642s STEP: Saw pod success Mar 19 21:11:52.834: INFO: Pod "client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74" satisfied condition "success or failure" Mar 19 21:11:52.837: INFO: Trying to get logs from node jerma-worker2 pod client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74 container test-container: STEP: delete the pod Mar 19 21:11:52.873: INFO: Waiting for pod client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74 to disappear Mar 19 21:11:52.887: INFO: Pod client-containers-54c376d9-7b71-4c46-a126-7a90fb82eb74 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:11:52.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4472" for this suite. 
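Note: the pod above succeeds because `command` in the container spec replaces the image's ENTRYPOINT (`args` would replace CMD). A minimal hand-runnable sketch, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: override-command
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Replaces the busybox ENTRYPOINT entirely.
    command: ["/bin/sh", "-c", "echo entrypoint overridden"]
EOF
kubectl logs override-command   # expect: entrypoint overridden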
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":254,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:11:52.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3349.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3349.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:11:59.012: INFO: File jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-5f334f61-eca5-40e0-aa50-9bb6a6f4123d contains '' instead of 'foo.example.com.' Mar 19 21:11:59.012: INFO: Lookups using dns-3349/dns-test-5f334f61-eca5-40e0-aa50-9bb6a6f4123d failed for: [jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local] Mar 19 21:12:04.021: INFO: DNS probes using dns-test-5f334f61-eca5-40e0-aa50-9bb6a6f4123d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3349.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3349.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:12:10.136: INFO: File wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:10.140: INFO: File jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:10.140: INFO: Lookups using dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 failed for: [wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local] Mar 19 21:12:15.145: INFO: File wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. 
' instead of 'bar.example.com.' Mar 19 21:12:15.149: INFO: File jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:15.149: INFO: Lookups using dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 failed for: [wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local] Mar 19 21:12:20.145: INFO: File wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:20.148: INFO: File jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:20.148: INFO: Lookups using dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 failed for: [wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local] Mar 19 21:12:25.145: INFO: File wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:25.149: INFO: File jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local from pod dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 19 21:12:25.149: INFO: Lookups using dns-3349/dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 failed for: [wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local] Mar 19 21:12:30.150: INFO: DNS probes using dns-test-c7406111-60d1-4d7f-bd0c-a52087394b09 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3349.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3349.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3349.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3349.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:12:36.746: INFO: DNS probes using dns-test-e31592bf-bbe9-4593-90bc-2d3ba98d9258 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:12:36.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3349" for this suite. 
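Note: an ExternalName service is just a CNAME published by cluster DNS, which is why the probes above expect foo.example.com. and then bar.example.com. after the spec change. A hand-runnable sketch (service name and probe image are illustrative; the suite's prober images differ):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ext-svc
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Query the CNAME from any pod with dig available.
kubectl run dns-probe --image=tutum/dnsutils --restart=Never --rm -i -- \
  dig +short ext-svc.default.svc.cluster.local CNAME
# expect: foo.example.com.
# Changing externalName changes the published CNAME, as the test does above:
kubectl patch svc ext-svc -p '{"spec":{"externalName":"bar.example.com"}}'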
• [SLOW TEST:44.199 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":19,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:12:37.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 19 21:12:37.178: INFO: Waiting up to 5m0s for pod "pod-74539a20-dc5f-4342-af56-486a01d0f2b8" in namespace "emptydir-5816" to be "success or failure" Mar 19 21:12:37.182: INFO: Pod "pod-74539a20-dc5f-4342-af56-486a01d0f2b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.852951ms Mar 19 21:12:39.186: INFO: Pod "pod-74539a20-dc5f-4342-af56-486a01d0f2b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007985613s Mar 19 21:12:41.190: INFO: Pod "pod-74539a20-dc5f-4342-af56-486a01d0f2b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012117011s STEP: Saw pod success Mar 19 21:12:41.190: INFO: Pod "pod-74539a20-dc5f-4342-af56-486a01d0f2b8" satisfied condition "success or failure" Mar 19 21:12:41.192: INFO: Trying to get logs from node jerma-worker pod pod-74539a20-dc5f-4342-af56-486a01d0f2b8 container test-container: STEP: delete the pod Mar 19 21:12:41.226: INFO: Waiting for pod pod-74539a20-dc5f-4342-af56-486a01d0f2b8 to disappear Mar 19 21:12:41.278: INFO: Pod pod-74539a20-dc5f-4342-af56-486a01d0f2b8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:12:41.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5816" for this suite. 
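Note: the (root,0644,tmpfs) case boils down to an emptyDir with medium: Memory plus a file-mode check. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed, as in the tmpfs variant above
EOF
kubectl logs emptydir-tmpfs     # expect -rw-r--r-- plus a tmpfs mount entry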
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:12:41.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5860.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5860.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5860.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5860.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5860.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5860.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:12:47.378: INFO: DNS probes using dns-5860/dns-test-7be53323-1ca9-4e87-8c97-a49e6f0bc977 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:12:47.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5860" for this suite. 
• [SLOW TEST:6.171 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":21,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:12:47.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:12:47.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d" in namespace "projected-4718" to be "success or failure" Mar 19 21:12:47.856: INFO: Pod "downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.923443ms Mar 19 21:12:49.859: INFO: Pod "downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047454909s Mar 19 21:12:51.868: INFO: Pod "downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056653101s STEP: Saw pod success Mar 19 21:12:51.868: INFO: Pod "downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d" satisfied condition "success or failure" Mar 19 21:12:51.871: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d container client-container: STEP: delete the pod Mar 19 21:12:51.890: INFO: Waiting for pod downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d to disappear Mar 19 21:12:51.894: INFO: Pod downwardapi-volume-561ef36f-2ea7-4557-ae62-3ede7971f64d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:12:51.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4718" for this suite. 
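Note: the memory limit above is exposed through a projected downwardAPI volume with a resourceFieldRef. A minimal sketch (names illustrative); with the default divisor the file holds the limit in bytes:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downward-mem       # expect: 67108864 (64Mi in bytes)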
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:12:51.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:12:58.016: INFO: DNS probes using dns-8722/dns-test-10a6653a-e313-4402-9025-1f64f697da10 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:12:58.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8722" for this suite. 
• [SLOW TEST:6.192 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":23,"skipped":385,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:12:58.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 19 21:12:59.041: INFO: Pod name wrapped-volume-race-f565ae8b-29b0-4c5a-bb96-95f783f66b23: Found 0 pods out of 5 Mar 19 21:13:04.050: INFO: Pod name wrapped-volume-race-f565ae8b-29b0-4c5a-bb96-95f783f66b23: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f565ae8b-29b0-4c5a-bb96-95f783f66b23 in namespace emptydir-wrapper-663, will wait for the garbage collector to delete the pods Mar 19 21:13:16.132: INFO: Deleting ReplicationController wrapped-volume-race-f565ae8b-29b0-4c5a-bb96-95f783f66b23 took: 6.124129ms Mar 19 21:13:16.532: INFO: Terminating ReplicationController wrapped-volume-race-f565ae8b-29b0-4c5a-bb96-95f783f66b23 pods took: 400.303229ms STEP: Creating RC which spawns configmap-volume pods Mar 19 21:13:30.562: INFO: Pod name wrapped-volume-race-7ba92815-3694-4059-823a-7c163b019271: Found 0 pods out of 5 Mar 19 21:13:35.586: INFO: Pod name wrapped-volume-race-7ba92815-3694-4059-823a-7c163b019271: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7ba92815-3694-4059-823a-7c163b019271 in namespace emptydir-wrapper-663, will wait for the garbage collector to delete the pods Mar 19 21:13:49.688: INFO: Deleting ReplicationController wrapped-volume-race-7ba92815-3694-4059-823a-7c163b019271 took: 7.082359ms Mar 19 21:13:49.988: INFO: Terminating ReplicationController wrapped-volume-race-7ba92815-3694-4059-823a-7c163b019271 pods took: 300.345224ms STEP: Creating RC which spawns configmap-volume pods Mar 19 21:14:00.514: INFO: Pod name wrapped-volume-race-407a326e-3259-4395-a650-58363a3d4e3c: Found 0 pods out of 5 Mar 19 21:14:05.522: INFO: Pod name wrapped-volume-race-407a326e-3259-4395-a650-58363a3d4e3c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-407a326e-3259-4395-a650-58363a3d4e3c in namespace emptydir-wrapper-663, will wait for the garbage collector to delete the pods Mar 19 21:14:19.610: INFO: Deleting ReplicationController wrapped-volume-race-407a326e-3259-4395-a650-58363a3d4e3c took: 7.457126ms Mar 19 21:14:19.911: INFO: Terminating 
ReplicationController wrapped-volume-race-407a326e-3259-4395-a650-58363a3d4e3c pods took: 300.310799ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:14:30.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-663" for this suite. • [SLOW TEST:92.094 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":24,"skipped":402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:14:30.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 19 21:14:35.315: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:14:35.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-532" for this suite. 
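Note: adoption and release above are driven purely by label selection plus ownerReferences. A sketch of the same sequence (all names illustrative):

kubectl run pod-adoption-release --image=nginx --restart=Never --labels=name=pod-adoption-release
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# The pre-existing pod is adopted: it gains an ownerReference to the ReplicaSet.
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
# Changing the matched label releases the pod, and the ReplicaSet spawns a replacement.
kubectl label pod pod-adoption-release name=released --overwrite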
• [SLOW TEST:5.286 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":25,"skipped":435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:14:35.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-1626 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1626 STEP: Deleting pre-stop pod Mar 19 21:14:50.718: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:14:50.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1626" for this suite. 
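Note: the {"prestop": 1} payload above appears to be the deleted tester pod's preStop hook reporting back to the server pod before termination. The hook itself is ordinary pod spec; a minimal self-contained sketch (names illustrative, the echo stands in for the suite's HTTP callback):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          # Runs to completion before the container receives SIGTERM;
          # /proc/1/fd/1 routes the message to the container's stdout.
          command: ["/bin/sh", "-c", "echo prestop fired > /proc/1/fd/1; sleep 2"]
EOF
kubectl delete pod prestop-demo   # deletion triggers the hook during graceful termination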
• [SLOW TEST:15.258 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":26,"skipped":469,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:14:50.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 19 21:14:50.784: INFO: PodSpec: initContainers in spec.initContainers Mar 19 21:15:40.208: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d32d7096-0251-4d89-b2f7-d8dce39a9b67", GenerateName:"", Namespace:"init-container-6646", SelfLink:"/api/v1/namespaces/init-container-6646/pods/pod-init-d32d7096-0251-4d89-b2f7-d8dce39a9b67", UID:"8160f467-0d1a-4cc7-81bc-f671b32e7f37", ResourceVersion:"1111329", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720249290, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"784893882"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6tthk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003070000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6tthk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6tthk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6tthk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00575a068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004f80000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00575a0f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00575a110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00575a118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00575a11c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249291, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249291, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249291, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249290, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.207", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.207"}}, StartTime:(*v1.Time)(0xc002162040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f2070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f20e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a0dd9e7d5332be4b626fbf580074f3cc701ce8c7c65f3c8628e950821e5410c6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002162080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002162060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00575a19f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:15:40.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6646" for this suite. • [SLOW TEST:49.550 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":27,"skipped":485,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:15:40.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:15:40.888: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:15:42.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249340, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249340, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249340, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249340, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:15:45.982: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:15:45.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9082-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:15:47.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2424" for this suite. STEP: Destroying namespace "webhook-2424-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.006 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":28,"skipped":507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:15:47.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:03.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6157" for this suite. • [SLOW TEST:16.247 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":29,"skipped":533,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:03.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 19 21:16:03.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4906' Mar 19 21:16:03.845: INFO: stderr: "" Mar 19 21:16:03.845: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 19 21:16:03.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:03.948: INFO: stderr: "" Mar 19 21:16:03.948: INFO: stdout: "update-demo-nautilus-g8c4h update-demo-nautilus-kjz4l " Mar 19 21:16:03.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:04.045: INFO: stderr: "" Mar 19 21:16:04.045: INFO: stdout: "" Mar 19 21:16:04.045: INFO: update-demo-nautilus-g8c4h is created but not running Mar 19 21:16:09.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:09.141: INFO: stderr: "" Mar 19 21:16:09.141: INFO: stdout: "update-demo-nautilus-g8c4h update-demo-nautilus-kjz4l " Mar 19 21:16:09.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:09.222: INFO: stderr: "" Mar 19 21:16:09.222: INFO: stdout: "true" Mar 19 21:16:09.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:09.304: INFO: stderr: "" Mar 19 21:16:09.304: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:16:09.304: INFO: validating pod update-demo-nautilus-g8c4h Mar 19 21:16:09.308: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:16:09.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:16:09.308: INFO: update-demo-nautilus-g8c4h is verified up and running Mar 19 21:16:09.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjz4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:09.399: INFO: stderr: "" Mar 19 21:16:09.399: INFO: stdout: "true" Mar 19 21:16:09.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjz4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:09.495: INFO: stderr: "" Mar 19 21:16:09.495: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:16:09.495: INFO: validating pod update-demo-nautilus-kjz4l Mar 19 21:16:09.498: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:16:09.498: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:16:09.498: INFO: update-demo-nautilus-kjz4l is verified up and running STEP: scaling down the replication controller Mar 19 21:16:09.502: INFO: scanned /root for discovery docs: Mar 19 21:16:09.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4906' Mar 19 21:16:10.636: INFO: stderr: "" Mar 19 21:16:10.636: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 19 21:16:10.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:10.732: INFO: stderr: "" Mar 19 21:16:10.732: INFO: stdout: "update-demo-nautilus-g8c4h update-demo-nautilus-kjz4l " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 19 21:16:15.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:15.842: INFO: stderr: "" Mar 19 21:16:15.842: INFO: stdout: "update-demo-nautilus-g8c4h update-demo-nautilus-kjz4l " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 19 21:16:20.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:20.949: INFO: stderr: "" Mar 19 21:16:20.949: INFO: stdout: "update-demo-nautilus-g8c4h " Mar 19 21:16:20.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:21.037: INFO: stderr: "" Mar 19 21:16:21.037: INFO: stdout: "true" Mar 19 21:16:21.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:21.128: INFO: stderr: "" Mar 19 21:16:21.128: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:16:21.128: INFO: validating pod update-demo-nautilus-g8c4h Mar 19 21:16:21.131: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:16:21.132: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:16:21.132: INFO: update-demo-nautilus-g8c4h is verified up and running STEP: scaling up the replication controller Mar 19 21:16:21.134: INFO: scanned /root for discovery docs: Mar 19 21:16:21.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4906' Mar 19 21:16:22.264: INFO: stderr: "" Mar 19 21:16:22.264: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 19 21:16:22.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:22.362: INFO: stderr: "" Mar 19 21:16:22.362: INFO: stdout: "update-demo-nautilus-g8c4h update-demo-nautilus-t8grd " Mar 19 21:16:22.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:22.459: INFO: stderr: "" Mar 19 21:16:22.459: INFO: stdout: "true" Mar 19 21:16:22.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:22.545: INFO: stderr: "" Mar 19 21:16:22.545: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:16:22.545: INFO: validating pod update-demo-nautilus-g8c4h Mar 19 21:16:22.547: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:16:22.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:16:22.548: INFO: update-demo-nautilus-g8c4h is verified up and running Mar 19 21:16:22.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8grd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:22.773: INFO: stderr: "" Mar 19 21:16:22.773: INFO: stdout: "" Mar 19 21:16:22.773: INFO: update-demo-nautilus-t8grd is created but not running Mar 19 21:16:27.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4906' Mar 19 21:16:27.863: INFO: stderr: "" Mar 19 21:16:27.863: INFO: stdout: "update-demo-nautilus-g8c4h update-demo-nautilus-t8grd " Mar 19 21:16:27.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:27.950: INFO: stderr: "" Mar 19 21:16:27.950: INFO: stdout: "true" Mar 19 21:16:27.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8c4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:28.042: INFO: stderr: "" Mar 19 21:16:28.042: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:16:28.042: INFO: validating pod update-demo-nautilus-g8c4h Mar 19 21:16:28.045: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:16:28.046: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:16:28.046: INFO: update-demo-nautilus-g8c4h is verified up and running Mar 19 21:16:28.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8grd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:28.139: INFO: stderr: "" Mar 19 21:16:28.139: INFO: stdout: "true" Mar 19 21:16:28.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8grd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4906' Mar 19 21:16:28.278: INFO: stderr: "" Mar 19 21:16:28.278: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:16:28.278: INFO: validating pod update-demo-nautilus-t8grd Mar 19 21:16:28.282: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:16:28.282: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:16:28.282: INFO: update-demo-nautilus-t8grd is verified up and running STEP: using delete to clean up resources Mar 19 21:16:28.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4906' Mar 19 21:16:28.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 19 21:16:28.380: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 19 21:16:28.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4906' Mar 19 21:16:28.506: INFO: stderr: "No resources found in kubectl-4906 namespace.\n" Mar 19 21:16:28.506: INFO: stdout: "" Mar 19 21:16:28.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4906 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 19 21:16:28.611: INFO: stderr: "" Mar 19 21:16:28.611: INFO: stdout: "update-demo-nautilus-g8c4h\nupdate-demo-nautilus-t8grd\n" Mar 19 21:16:29.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4906' Mar 19 21:16:29.223: INFO: stderr: "No resources found in kubectl-4906 namespace.\n" Mar 19 21:16:29.223: INFO: stdout: "" Mar 19 21:16:29.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4906 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 19 21:16:29.323: INFO: stderr: "" Mar 19 21:16:29.323: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:29.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4906" for this suite. 
• [SLOW TEST:25.791 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":30,"skipped":535,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:29.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7f6b45ce-f105-4c4e-b88b-e24672707f4d STEP: Creating a pod to test consume secrets Mar 19 21:16:29.594: INFO: Waiting up to 5m0s for pod "pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b" in namespace "secrets-9476" to be "success or failure" Mar 19 21:16:29.614: INFO: Pod "pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.144274ms Mar 19 21:16:31.647: INFO: Pod "pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052342891s Mar 19 21:16:33.651: INFO: Pod "pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056539978s STEP: Saw pod success Mar 19 21:16:33.651: INFO: Pod "pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b" satisfied condition "success or failure" Mar 19 21:16:33.654: INFO: Trying to get logs from node jerma-worker pod pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b container secret-volume-test: STEP: delete the pod Mar 19 21:16:33.690: INFO: Waiting for pod pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b to disappear Mar 19 21:16:33.694: INFO: Pod pod-secrets-13427bfa-63ca-4c84-b180-4883c668038b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:33.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9476" for this suite. STEP: Destroying namespace "secret-namespace-4530" for this suite. 
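What this test establishes is that Secrets are namespace-scoped: a volume mount resolves the secret name within the pod's own namespace only. A minimal sketch using the two namespaces from this run (the key and values are hypothetical):

  $ kubectl create secret generic demo-secret --from-literal=data-1=value-a --namespace=secrets-9476
  $ kubectl create secret generic demo-secret --from-literal=data-1=value-b --namespace=secret-namespace-4530
  # A pod in secrets-9476 mounting demo-secret reads value-a; the identically
  # named secret in secret-namespace-4530 never interferes with the mount.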
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:33.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:38.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8539" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":32,"skipped":573,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:38.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 19 21:16:38.385: INFO: Waiting up to 5m0s for pod "var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b" in namespace "var-expansion-9341" to be "success or failure" Mar 19 21:16:38.389: INFO: Pod "var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710363ms Mar 19 21:16:40.393: INFO: Pod "var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008038263s Mar 19 21:16:42.397: INFO: Pod "var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012042392s STEP: Saw pod success Mar 19 21:16:42.397: INFO: Pod "var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b" satisfied condition "success or failure" Mar 19 21:16:42.400: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b container dapi-container: STEP: delete the pod Mar 19 21:16:42.462: INFO: Waiting for pod var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b to disappear Mar 19 21:16:42.473: INFO: Pod var-expansion-eb5b2f21-2f24-4553-a216-26fffd37313b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:42.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9341" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":574,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:42.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:16:43.147: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:16:45.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249403, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249403, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249403, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720249403, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:16:48.206: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the 
admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:48.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9169" for this suite. STEP: Destroying namespace "webhook-9169-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.815 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":34,"skipped":576,"failed":0} SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:48.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 19 21:16:48.351: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2234" to be "success or failure" Mar 19 21:16:48.366: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.699707ms Mar 19 21:16:50.370: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019009333s Mar 19 21:16:52.408: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056891524s STEP: Saw pod success Mar 19 21:16:52.408: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 19 21:16:52.411: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 19 21:16:52.450: INFO: Waiting for pod pod-host-path-test to disappear Mar 19 21:16:52.462: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2234" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:52.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:16:52.560: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc" in namespace "downward-api-1059" to be "success or failure" Mar 19 21:16:52.570: INFO: Pod "downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.517298ms Mar 19 21:16:54.574: INFO: Pod "downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013643122s Mar 19 21:16:56.578: INFO: Pod "downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017923436s STEP: Saw pod success Mar 19 21:16:56.578: INFO: Pod "downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc" satisfied condition "success or failure" Mar 19 21:16:56.581: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc container client-container: STEP: delete the pod Mar 19 21:16:56.617: INFO: Waiting for pod downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc to disappear Mar 19 21:16:56.629: INFO: Pod downwardapi-volume-97d500c9-b8f5-4c65-80b8-c621ecb372cc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:16:56.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1059" for this suite. 
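The assertion in the downward API test that follows is that when a container declares no CPU limit, a downwardAPI volume item for limits.cpu falls back to the node's allocatable CPU. A minimal sketch of such a pod (all names here are illustrative):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dapi-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
  EOF
  $ kubectl logs dapi-cpu-demo   # prints the node's allocatable CPU, since no limit was set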
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":612,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:16:56.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-6ffa82db-59c1-4d73-b31b-2aa38bb06bf8 STEP: Creating a pod to test consume secrets Mar 19 21:16:56.715: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6" in namespace "projected-1549" to be "success or failure" Mar 19 21:16:56.719: INFO: Pod "pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146076ms Mar 19 21:16:58.734: INFO: Pod "pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019525088s Mar 19 21:17:00.738: INFO: Pod "pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023516176s STEP: Saw pod success Mar 19 21:17:00.738: INFO: Pod "pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6" satisfied condition "success or failure" Mar 19 21:17:00.741: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6 container secret-volume-test: STEP: delete the pod Mar 19 21:17:00.774: INFO: Waiting for pod pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6 to disappear Mar 19 21:17:00.785: INFO: Pod pod-projected-secrets-ae80d749-9ef2-442e-a85f-9c7a7c037af6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:17:00.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1549" for this suite. 
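"Consumable in multiple volumes" here means one secret projected at two different mount points in the same pod. A sketch of the relevant spec, under hypothetical names:

  $ kubectl create secret generic proj-demo --from-literal=data-1=value-1
  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-multi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
      volumeMounts:
      - name: vol-1
        mountPath: /etc/projected-1
      - name: vol-2
        mountPath: /etc/projected-2
    volumes:
    - name: vol-1
      projected:
        sources:
        - secret:
            name: proj-demo
    - name: vol-2
      projected:
        sources:
        - secret:
            name: proj-demo
  EOF

Both mounts expose the same data; the test's container just cats the files and the framework checks the container log output.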
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":625,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:17:00.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2642 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 19 21:17:00.887: INFO: Found 0 stateful pods, waiting for 3 Mar 19 21:17:10.890: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:17:10.890: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:17:10.890: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 19 21:17:20.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:17:20.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:17:20.891: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 19 21:17:20.919: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 19 21:17:30.956: INFO: Updating stateful set ss2 Mar 19 21:17:30.972: INFO: Waiting for Pod statefulset-2642/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 19 21:17:41.116: INFO: Found 2 stateful pods, waiting for 3 Mar 19 21:17:51.120: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:17:51.120: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:17:51.120: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 19 21:17:51.143: INFO: Updating stateful set ss2 Mar 19 21:17:51.194: INFO: Waiting for Pod statefulset-2642/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 19 21:18:01.220: INFO: Updating stateful set ss2 Mar 19 21:18:01.243: INFO: Waiting for StatefulSet statefulset-2642/ss2 to complete update Mar 19 21:18:01.243: INFO: 
Waiting for Pod statefulset-2642/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 19 21:18:11.250: INFO: Waiting for StatefulSet statefulset-2642/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 19 21:18:21.254: INFO: Deleting all statefulset in ns statefulset-2642 Mar 19 21:18:21.259: INFO: Scaling statefulset ss2 to 0 Mar 19 21:18:41.270: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:18:41.272: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:18:41.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2642" for this suite. • [SLOW TEST:100.504 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":38,"skipped":625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:18:41.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 19 21:18:41.389: INFO: Waiting up to 5m0s for pod "downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050" in namespace "downward-api-1508" to be "success or failure" Mar 19 21:18:41.392: INFO: Pod "downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050": Phase="Pending", Reason="", readiness=false. Elapsed: 3.615531ms Mar 19 21:18:43.400: INFO: Pod "downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011509314s Mar 19 21:18:45.405: INFO: Pod "downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016277894s STEP: Saw pod success Mar 19 21:18:45.405: INFO: Pod "downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050" satisfied condition "success or failure" Mar 19 21:18:45.408: INFO: Trying to get logs from node jerma-worker pod downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050 container dapi-container: STEP: delete the pod Mar 19 21:18:45.476: INFO: Waiting for pod downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050 to disappear Mar 19 21:18:45.481: INFO: Pod downward-api-f4a5b6b3-9e66-4bdc-a91c-04f34b116050 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:18:45.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1508" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:18:45.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 19 21:18:45.544: INFO: Waiting up to 5m0s for pod "client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a" in namespace "containers-6036" to be "success or failure" Mar 19 21:18:45.547: INFO: Pod "client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.394467ms Mar 19 21:18:47.643: INFO: Pod "client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098940525s Mar 19 21:18:49.646: INFO: Pod "client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102577688s STEP: Saw pod success Mar 19 21:18:49.646: INFO: Pod "client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a" satisfied condition "success or failure" Mar 19 21:18:49.649: INFO: Trying to get logs from node jerma-worker pod client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a container test-container: STEP: delete the pod Mar 19 21:18:49.681: INFO: Waiting for pod client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a to disappear Mar 19 21:18:49.691: INFO: Pod client-containers-96aa56a2-19fc-4c48-8364-58467c4c8b6a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:18:49.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6036" for this suite. 
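Overriding "the image's default arguments (docker cmd)" is done with args in the container spec, which replaces the image's CMD while leaving its ENTRYPOINT untouched. A minimal sketch:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: args-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      args: ["echo", "overridden"]
  EOF
  $ kubectl logs args-demo   # -> overridden

Setting command instead of args would override the ENTRYPOINT as well, which is what the companion "docker entrypoint" variant of this conformance test covers.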
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":693,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:18:49.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 19 21:18:52.804: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:18:52.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7154" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":707,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:18:52.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-8717bf77-8d17-43a1-bd06-2fc4e7e8131c STEP: Creating a pod to test consume secrets Mar 19 21:18:52.950: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0" in namespace "projected-8539" to be "success or failure" Mar 19 21:18:52.973: INFO: Pod "pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.583396ms Mar 19 21:18:55.038: INFO: Pod "pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.088540648s Mar 19 21:18:57.056: INFO: Pod "pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106079343s STEP: Saw pod success Mar 19 21:18:57.056: INFO: Pod "pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0" satisfied condition "success or failure" Mar 19 21:18:57.059: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0 container projected-secret-volume-test: STEP: delete the pod Mar 19 21:18:57.095: INFO: Waiting for pod pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0 to disappear Mar 19 21:18:57.105: INFO: Pod pod-projected-secrets-1683d366-e4fb-4cfc-8bf8-58a811cd3fd0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:18:57.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8539" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":715,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:18:57.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-9140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9140 to expose endpoints map[] Mar 19 21:18:57.225: INFO: successfully validated that service endpoint-test2 in namespace services-9140 exposes endpoints map[] (10.909931ms elapsed) STEP: Creating pod pod1 in namespace services-9140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9140 to expose endpoints map[pod1:[80]] Mar 19 21:19:00.275: INFO: successfully validated that service endpoint-test2 in namespace services-9140 exposes endpoints map[pod1:[80]] (3.03560481s elapsed) STEP: Creating pod pod2 in namespace services-9140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9140 to expose endpoints map[pod1:[80] pod2:[80]] Mar 19 21:19:03.358: INFO: successfully validated that service endpoint-test2 in namespace services-9140 exposes endpoints map[pod1:[80] pod2:[80]] (3.07880901s elapsed) STEP: Deleting pod pod1 in namespace services-9140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9140 to expose endpoints map[pod2:[80]] Mar 19 21:19:04.407: INFO: successfully validated that service endpoint-test2 in namespace services-9140 exposes endpoints map[pod2:[80]] (1.045482434s elapsed) STEP: Deleting pod pod2 in namespace services-9140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9140 to expose 
endpoints map[] Mar 19 21:19:05.577: INFO: successfully validated that service endpoint-test2 in namespace services-9140 exposes endpoints map[] (1.166568924s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:05.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9140" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.545 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":43,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:05.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 19 21:19:06.258: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix473646872/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:06.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5357" for this suite. 
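The --unix-socket test only verifies that kubectl proxy can serve the API over a local socket instead of a TCP port. This is reproducible by hand (the socket path is arbitrary; curl 7.40+ assumed):

  $ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  $ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

The host in the URL is used only for the Host header; the request itself travels over the socket.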
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":44,"skipped":767,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:06.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 19 21:19:06.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2621 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 19 21:19:08.863: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0319 21:19:08.754402 772 log.go:172] (0xc000a6d340) (0xc000ace320) Create stream\nI0319 21:19:08.754463 772 log.go:172] (0xc000a6d340) (0xc000ace320) Stream added, broadcasting: 1\nI0319 21:19:08.756679 772 log.go:172] (0xc000a6d340) Reply frame received for 1\nI0319 21:19:08.756711 772 log.go:172] (0xc000a6d340) (0xc0005e5900) Create stream\nI0319 21:19:08.756718 772 log.go:172] (0xc000a6d340) (0xc0005e5900) Stream added, broadcasting: 3\nI0319 21:19:08.757495 772 log.go:172] (0xc000a6d340) Reply frame received for 3\nI0319 21:19:08.757519 772 log.go:172] (0xc000a6d340) (0xc000ace3c0) Create stream\nI0319 21:19:08.757525 772 log.go:172] (0xc000a6d340) (0xc000ace3c0) Stream added, broadcasting: 5\nI0319 21:19:08.758106 772 log.go:172] (0xc000a6d340) Reply frame received for 5\nI0319 21:19:08.758153 772 log.go:172] (0xc000a6d340) (0xc000850000) Create stream\nI0319 21:19:08.758178 772 log.go:172] (0xc000a6d340) (0xc000850000) Stream added, broadcasting: 7\nI0319 21:19:08.759200 772 log.go:172] (0xc000a6d340) Reply frame received for 7\nI0319 21:19:08.759305 772 log.go:172] (0xc0005e5900) (3) Writing data frame\nI0319 21:19:08.759411 772 log.go:172] (0xc0005e5900) (3) Writing data frame\nI0319 21:19:08.760097 772 log.go:172] (0xc000a6d340) Data frame received for 5\nI0319 21:19:08.760109 772 log.go:172] (0xc000ace3c0) (5) Data frame handling\nI0319 21:19:08.760115 772 log.go:172] (0xc000ace3c0) (5) Data frame sent\nI0319 21:19:08.760721 772 log.go:172] (0xc000a6d340) Data frame received for 5\nI0319 21:19:08.760749 772 log.go:172] (0xc000ace3c0) (5) Data frame handling\nI0319 21:19:08.760764 772 log.go:172] (0xc000ace3c0) (5) Data frame sent\nI0319 21:19:08.803694 772 log.go:172] (0xc000a6d340) Data frame received for 5\nI0319 21:19:08.803743 772 log.go:172] (0xc000ace3c0) (5) Data frame handling\nI0319 21:19:08.803773 772 log.go:172] 
(0xc000a6d340) Data frame received for 7\nI0319 21:19:08.803803 772 log.go:172] (0xc000850000) (7) Data frame handling\nI0319 21:19:08.804189 772 log.go:172] (0xc000a6d340) Data frame received for 1\nI0319 21:19:08.804212 772 log.go:172] (0xc000ace320) (1) Data frame handling\nI0319 21:19:08.804234 772 log.go:172] (0xc000ace320) (1) Data frame sent\nI0319 21:19:08.804254 772 log.go:172] (0xc000a6d340) (0xc000ace320) Stream removed, broadcasting: 1\nI0319 21:19:08.804306 772 log.go:172] (0xc000a6d340) (0xc0005e5900) Stream removed, broadcasting: 3\nI0319 21:19:08.804448 772 log.go:172] (0xc000a6d340) Go away received\nI0319 21:19:08.804751 772 log.go:172] (0xc000a6d340) (0xc000ace320) Stream removed, broadcasting: 1\nI0319 21:19:08.804780 772 log.go:172] (0xc000a6d340) (0xc0005e5900) Stream removed, broadcasting: 3\nI0319 21:19:08.804793 772 log.go:172] (0xc000a6d340) (0xc000ace3c0) Stream removed, broadcasting: 5\nI0319 21:19:08.804805 772 log.go:172] (0xc000a6d340) (0xc000850000) Stream removed, broadcasting: 7\n" Mar 19 21:19:08.863: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:10.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2621" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":45,"skipped":775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:10.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:10.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6914" for this suite. 
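Like the webhook discovery test earlier in the run, this one just walks the aggregated discovery documents under /apis. The same documents can be fetched directly (jq is assumed only for readability):

  $ kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
  $ kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'
  # expect customresourcedefinitions (plus its status subresource) in the output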
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":46,"skipped":825,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:11.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6756 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-6756 Mar 19 21:19:11.134: INFO: Found 0 stateful pods, waiting for 1 Mar 19 21:19:21.138: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 19 21:19:21.160: INFO: Deleting all statefulset in ns statefulset-6756 Mar 19 21:19:21.180: INFO: Scaling statefulset ss to 0 Mar 19 21:19:31.243: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:19:31.246: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:31.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6756" for this suite. 
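A "working scale subresource" means the StatefulSet can be read and resized through the /scale endpoint rather than by editing the whole object. Two equivalent manual probes, using the names from this run:

  $ kubectl get --raw /apis/apps/v1/namespaces/statefulset-6756/statefulsets/ss/scale
  $ kubectl scale statefulset ss --replicas=2 --namespace=statefulset-6756

kubectl scale is itself a client of that subresource, so the second command exercises the same path the test does.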
• [SLOW TEST:20.275 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":47,"skipped":826,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:31.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-666b81bb-18cf-440c-a3be-047fe1b22aa3 STEP: Creating a pod to test consume configMaps Mar 19 21:19:31.348: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad" in namespace "projected-6199" to be "success or failure" Mar 19 21:19:31.365: INFO: Pod "pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 16.167549ms Mar 19 21:19:33.368: INFO: Pod "pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019414437s Mar 19 21:19:35.372: INFO: Pod "pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023500747s STEP: Saw pod success Mar 19 21:19:35.372: INFO: Pod "pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad" satisfied condition "success or failure" Mar 19 21:19:35.375: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad container projected-configmap-volume-test: STEP: delete the pod Mar 19 21:19:35.395: INFO: Waiting for pod pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad to disappear Mar 19 21:19:35.399: INFO: Pod pod-projected-configmaps-b01e9e98-f58e-411a-b38a-709fa91ac8ad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6199" for this suite. 
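The projected-configMap test that follows is the same pattern as the projected-secret test above, with a configMap source in the projected volume. The pieces, under hypothetical names:

  $ kubectl create configmap cm-demo --from-literal=data-1=value-1
  # then, in the pod spec, a projected volume such as:
  #   volumes:
  #   - name: cfg
  #     projected:
  #       sources:
  #       - configMap:
  #           name: cm-demo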
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":834,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:35.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a5ed6c92-8a29-4006-a1da-f73c3a9ac248 STEP: Creating a pod to test consume configMaps Mar 19 21:19:35.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871" in namespace "configmap-4737" to be "success or failure" Mar 19 21:19:35.509: INFO: Pod "pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871": Phase="Pending", Reason="", readiness=false. Elapsed: 17.20453ms Mar 19 21:19:37.513: INFO: Pod "pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021813012s Mar 19 21:19:39.518: INFO: Pod "pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026200924s STEP: Saw pod success Mar 19 21:19:39.518: INFO: Pod "pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871" satisfied condition "success or failure" Mar 19 21:19:39.520: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871 container configmap-volume-test: STEP: delete the pod Mar 19 21:19:39.554: INFO: Waiting for pod pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871 to disappear Mar 19 21:19:39.567: INFO: Pod pod-configmaps-ba312120-5f70-4f0a-baac-a86564df7871 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:39.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4737" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":836,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:19:39.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 19 21:19:43.686: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 19 21:19:53.793: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:19:53.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1065" for this suite. 
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:19:53.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Mar 19 21:19:53.870: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 19 21:19:53.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4402'
Mar 19 21:19:54.139: INFO: stderr: ""
Mar 19 21:19:54.139: INFO: stdout: "service/agnhost-slave created\n"
Mar 19 21:19:54.139: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 19 21:19:54.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4402'
Mar 19 21:19:54.416: INFO: stderr: ""
Mar 19 21:19:54.416: INFO: stdout: "service/agnhost-master created\n"
Mar 19 21:19:54.416: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 19 21:19:54.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4402'
Mar 19 21:19:54.699: INFO: stderr: ""
Mar 19 21:19:54.699: INFO: stdout: "service/frontend created\n"
Mar 19 21:19:54.699: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 19 21:19:54.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4402'
Mar 19 21:19:54.974: INFO: stderr: ""
Mar 19 21:19:54.974: INFO: stdout: "deployment.apps/frontend created\n"
Mar 19 21:19:54.974: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 19 21:19:54.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4402'
Mar 19 21:19:55.291: INFO: stderr: ""
Mar 19 21:19:55.291: INFO: stdout: "deployment.apps/agnhost-master created\n"
Mar 19 21:19:55.291: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 19 21:19:55.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4402'
Mar 19 21:19:55.557: INFO: stderr: ""
Mar 19 21:19:55.557: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 19 21:19:55.557: INFO: Waiting for all frontend pods to be Running.
Mar 19 21:20:05.608: INFO: Waiting for frontend to serve content.
Mar 19 21:20:05.618: INFO: Trying to add a new entry to the guestbook.
Mar 19 21:20:05.630: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 19 21:20:05.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4402'
Mar 19 21:20:05.847: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 19 21:20:05.847: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 19 21:20:05.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4402'
Mar 19 21:20:05.983: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 19 21:20:05.983: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 19 21:20:05.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4402'
Mar 19 21:20:06.135: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 19 21:20:06.135: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 19 21:20:06.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4402'
Mar 19 21:20:06.236: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 19 21:20:06.236: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 19 21:20:06.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4402'
Mar 19 21:20:06.363: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 19 21:20:06.363: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 19 21:20:06.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4402'
Mar 19 21:20:06.493: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 19 21:20:06.493: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 21:20:06.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4402" for this suite.
• [SLOW TEST:12.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":51,"skipped":847,"failed":0}
SSS
------------------------------
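The repeated warnings above are the expected output of force deletion. A sketch of the cleanup pattern the test uses, shown against one of its own deployments:

# --grace-period=0 --force removes the API object immediately without waiting
# for the kubelet; containers may briefly outlive the object on the node,
# which is exactly what the warning says
kubectl delete deployment frontend --grace-period=0 --force -n kubectl-4402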
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:20:06.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar 19 21:20:06.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2455 /api/v1/namespaces/watch-2455/configmaps/e2e-watch-test-label-changed 1c6b8ea3-5638-44fc-ad25-04242013c441 1113410 0 2020-03-19 21:20:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 19 21:20:06.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2455 /api/v1/namespaces/watch-2455/configmaps/e2e-watch-test-label-changed 1c6b8ea3-5638-44fc-ad25-04242013c441 1113411 0 2020-03-19 21:20:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 19 21:20:06.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2455 /api/v1/namespaces/watch-2455/configmaps/e2e-watch-test-label-changed 1c6b8ea3-5638-44fc-ad25-04242013c441 1113412 0 2020-03-19 21:20:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 19 21:20:16.913: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2455 /api/v1/namespaces/watch-2455/configmaps/e2e-watch-test-label-changed 1c6b8ea3-5638-44fc-ad25-04242013c441 1113494 0 2020-03-19 21:20:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 19 21:20:16.914: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2455 /api/v1/namespaces/watch-2455/configmaps/e2e-watch-test-label-changed 1c6b8ea3-5638-44fc-ad25-04242013c441 1113495 0 2020-03-19 21:20:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Mar 19 21:20:16.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2455 /api/v1/namespaces/watch-2455/configmaps/e2e-watch-test-label-changed 1c6b8ea3-5638-44fc-ad25-04242013c441 1113496 0 2020-03-19 21:20:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 21:20:16.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2455" for this suite.
• [SLOW TEST:10.411 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":52,"skipped":850,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:20:16.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 19 21:20:16.990: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 19 21:20:17.014: INFO: Waiting for terminating namespaces to be deleted...
Mar 19 21:20:17.017: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Mar 19 21:20:17.024: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 19 21:20:17.024: INFO: Container kindnet-cni ready: true, restart count 0
Mar 19 21:20:17.024: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 19 21:20:17.024: INFO: Container kube-proxy ready: true, restart count 0
Mar 19 21:20:17.024: INFO: frontend-6c5f89d5d4-9ndfl from kubectl-4402 started at 2020-03-19 21:19:55 +0000 UTC (1 container statuses recorded)
Mar 19 21:20:17.024: INFO: Container guestbook-frontend ready: false, restart count 0
Mar 19 21:20:17.024: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Mar 19 21:20:17.029: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 19 21:20:17.029: INFO: Container kindnet-cni ready: true, restart count 0
Mar 19 21:20:17.029: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 19 21:20:17.029: INFO: Container kube-proxy ready: true, restart count 0
Mar 19 21:20:17.029: INFO: frontend-6c5f89d5d4-m95p6 from kubectl-4402 started at 2020-03-19 21:19:55 +0000 UTC (1 container statuses recorded)
Mar 19 21:20:17.029: INFO: Container guestbook-frontend ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4950cc25-b297-4b03-969f-21be47fa08b6 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-4950cc25-b297-4b03-969f-21be47fa08b6 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4950cc25-b297-4b03-969f-21be47fa08b6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 21:25:25.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6928" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:308.276 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":53,"skipped":854,"failed":0}
S
------------------------------
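What the scheduler is enforcing above: two pods on the same node may not request the same hostPort and protocol when their hostIPs overlap, and 0.0.0.0 (the empty-string default) overlaps every specific hostIP, including 127.0.0.1. A hedged sketch of the second, unschedulable pod; this manifest is an illustration rather than the test's exact object, and it reuses the node label from this run to pin the pod to the same node:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-4950cc25-b297-4b03-969f-21be47fa08b6: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
EOF
# with pod4 already holding 0.0.0.0:54322 on that node, pod5 stays Pending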
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:25:25.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0319 21:26:06.194571 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 19 21:26:06.194: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 21:26:06.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-729" for this suite.
• [SLOW TEST:41.004 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":54,"skipped":855,"failed":0}
SSSSSSSSSSS
------------------------------
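Orphaning means the rc itself is deleted while its pods are left behind with their ownerReferences cleared. A sketch of the delete option involved, using a hypothetical rc name; kubectl of this vintage spells the orphan policy as --cascade=false (the API-level equivalent is propagationPolicy: Orphan in DeleteOptions):

# delete only the owner; the garbage collector must NOT remove the pods,
# which is what the 30-second wait above verifies
kubectl delete rc demo-rc --cascade=false
kubectl get pods   # the rc's pods should still be running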
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:26:06.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 19 21:26:06.263: INFO: Creating deployment "webserver-deployment"
Mar 19 21:26:06.266: INFO: Waiting for observed generation 1
Mar 19 21:26:08.284: INFO: Waiting for all required pods to come up
Mar 19 21:26:08.289: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 19 21:26:16.299: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 19 21:26:16.398: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 19 21:26:16.465: INFO: Updating deployment webserver-deployment
Mar 19 21:26:16.465: INFO: Waiting for observed generation 2
Mar 19 21:26:19.110: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 19 21:26:19.116: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 19 21:26:19.208: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 19 21:26:19.860: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 19 21:26:19.860: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 19 21:26:19.863: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 19 21:26:19.867: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 19 21:26:19.867: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 19 21:26:19.873: INFO: Updating deployment webserver-deployment
Mar 19 21:26:19.873: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 19 21:26:20.368: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 19 21:26:22.812: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 19 21:26:23.549: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-6093 /apis/apps/v1/namespaces/deployment-6093/deployments/webserver-deployment e0adc067-a0cd-494a-897d-6c2660db991c 1115027 3 2020-03-19 21:26:06 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f87e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-19 21:26:19 +0000 UTC,LastTransitionTime:2020-03-19 21:26:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-19 21:26:20 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 19 21:26:23.611: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6093 /apis/apps/v1/namespaces/deployment-6093/replicasets/webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 1115023 3 2020-03-19 21:26:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e0adc067-a0cd-494a-897d-6c2660db991c 0xc0006353e7 0xc0006353e8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000635478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:26:23.611: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 19 21:26:23.611: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6093 /apis/apps/v1/namespaces/deployment-6093/replicasets/webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 1115016 3 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e0adc067-a0cd-494a-897d-6c2660db991c 0xc000634f37 0xc000634f38}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0006352f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:26:24.026: INFO: Pod "webserver-deployment-595b5b9587-452kp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-452kp webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-452kp 702006e6-f413-4906-ac24-0d1f25a956ab 1115012 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f22c77 0xc001f22c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.027: INFO: Pod "webserver-deployment-595b5b9587-5gsvl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5gsvl webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-5gsvl f52876bd-958c-499b-9004-924032cae456 1114762 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f22d97 0xc001f22d98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.190,StartTime:2020-03-19 21:26:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cbf37f9007dc0dbc696bde97d83c764a192e6f69a94eebe87865747172869edb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.027: INFO: Pod "webserver-deployment-595b5b9587-6f6dp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6f6dp webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-6f6dp bd5ec6f5-8dcc-49ea-a2a4-0660c3b1d3bd 1115014 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f230e7 0xc001f230e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.027: INFO: Pod "webserver-deployment-595b5b9587-8rmz6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8rmz6 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-8rmz6 76f5ef3f-ec8b-4be1-91d6-dde049663302 1115013 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f23217 0xc001f23218}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.027: INFO: Pod "webserver-deployment-595b5b9587-9798h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9798h webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-9798h 063da9e2-c917-4882-8cb6-3ede2ac8cfc5 1114796 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f23337 0xc001f23338}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tol
eration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.191,StartTime:2020-03-19 21:26:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://52e76d2b35b31490ad5fa7c0792315b6aaebdea960953571e7cd1db0262461f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.027: INFO: Pod "webserver-deployment-595b5b9587-9dgzc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9dgzc webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-9dgzc 85720234-a287-4f50-8e6d-7add451ee681 1115011 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f234b7 0xc001f234b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.027: INFO: Pod "webserver-deployment-595b5b9587-bll69" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bll69 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-bll69 e9e2c576-f22c-4ed9-ac48-d3245d12e031 1114786 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f235d7 0xc001f235d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.233,StartTime:2020-03-19 21:26:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a17f7a944bb7a86536df4afaeab575a2a6ae51b0507685bf0fc3315d3b39530b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-fc4cj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fc4cj webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-fc4cj 932d30dd-cde5-4502-b5bf-c682217af374 1115053 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f23757 0xc001f23758}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-g6z4x" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g6z4x webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-g6z4x 891fc5f7-b63e-4d5a-b1a5-d57f251570ba 1114877 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f238b7 0xc001f238b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.194,StartTime:2020-03-19 21:26:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bdb0e90e590c303a65d444d53378ba6d62f3e0ff1aad95ed56902c04f7a82ed2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-gn8dq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gn8dq webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-gn8dq 7d94617c-48e3-483e-9907-8433a7b7ff44 1114801 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f23c47 0xc001f23c48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.232,StartTime:2020-03-19 21:26:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d18dc2860d9251e3baf10b0c9f30278a6c7f670eb3fa80cf9b14d07ab503b2d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-k2k95" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k2k95 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-k2k95 e3cedfe2-c907-4680-a49c-1c411257727a 1115040 0 2020-03-19 21:26:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f23dd7 0xc001f23dd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-kp4k9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kp4k9 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-kp4k9 6676e382-b772-46c6-a777-8f5d6b064b09 1115018 0 2020-03-19 21:26:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc001f23f37 0xc001f23f38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-p2rwx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p2rwx webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-p2rwx 4b990c0b-e364-406e-8cf0-5ff429f6b9d7 1115032 0 2020-03-19 21:26:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032e2d7 0xc00032e2d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.028: INFO: Pod "webserver-deployment-595b5b9587-s4zsf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s4zsf webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-s4zsf 7d93f95f-c4d4-4cae-92b0-e4e4ab064f01 1114831 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032e8c7 0xc00032e8c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.235,StartTime:2020-03-19 21:26:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7ebf14e31557732c898e58aa99923e61a5dec714d4305cd9dbed58a6dc544993,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.029: INFO: Pod "webserver-deployment-595b5b9587-sc62r" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sc62r webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-sc62r 420d2dfb-d7cc-4781-a3c4-748d34334c0f 1115069 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032eb37 0xc00032eb38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.029: INFO: Pod "webserver-deployment-595b5b9587-ssp89" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ssp89 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-ssp89 eaba6698-90dc-4243-be40-7fe645984022 1114835 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032ee27 0xc00032ee28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.236,StartTime:2020-03-19 21:26:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://66417de13e95720f31062437f60d6103dcb0600e401a9012867ccdbadad3e6fd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.029: INFO: Pod "webserver-deployment-595b5b9587-t6tqz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t6tqz webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-t6tqz 812029f8-d5b2-4c60-8a32-29e1bbb73b25 1114793 0 2020-03-19 21:26:06 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032f337 0xc00032f338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.234,StartTime:2020-03-19 21:26:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:26:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://15f97055931f55c0ca4617e3ce7ed73cd527c14807ac5e2d93e8d76c57e40eb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.029: INFO: Pod "webserver-deployment-595b5b9587-xllz7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xllz7 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-xllz7 5e6d9d56-0ab9-4077-a0ca-4403aa03c5c0 1115074 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032f597 0xc00032f598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.029: INFO: Pod "webserver-deployment-595b5b9587-znxn6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-znxn6 webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-znxn6 d6a78192-0171-4a3c-a757-fc6caa99f87d 1115086 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032fa67 0xc00032fa68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.029: INFO: Pod "webserver-deployment-595b5b9587-zr78h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zr78h webserver-deployment-595b5b9587- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-595b5b9587-zr78h b334eb92-534d-41c5-824d-b967ebb0019e 1115055 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c9c8905c-fe96-4076-82a8-71f4b1e34a52 0xc00032ff67 0xc00032ff68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-5dn4x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5dn4x webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-5dn4x f8791c5c-7c07-426d-a0de-9e6df61cd7f6 1115078 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc0004318a7 0xc0004318a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-76kzs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-76kzs webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-76kzs e86b75e0-be0f-45f6-afb7-9ebe445fa10f 1114941 0 2020-03-19 21:26:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000642c77 0xc000642c78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-77lr2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-77lr2 webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-77lr2 323f444e-c46d-4a88-8416-7d009a23bdcc 1115094 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000643f47 0xc000643f48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-8dp45" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8dp45 webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-8dp45 e6ab7ae3-644c-4d83-9cb1-0a66b61edc0f 1114953 0 2020-03-19 21:26:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4a147 0xc000c4a148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-cgkbw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cgkbw webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-cgkbw 0daf430e-4099-4144-843e-13f4b75603eb 1114950 0 2020-03-19 21:26:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4a2d7 0xc000c4a2d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-dcnkm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dcnkm webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-dcnkm a7af535d-53df-4ab2-b368-c83c2aff190f 1114933 0 2020-03-19 21:26:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4a457 0xc000c4a458}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.030: INFO: Pod "webserver-deployment-c7997dcc8-dvzj7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dvzj7 webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-dvzj7 e3e87d17-f17d-429c-9343-baf0ee0398aa 1115081 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4a607 0xc000c4a608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.031: INFO: Pod "webserver-deployment-c7997dcc8-lwqwc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lwqwc webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-lwqwc 06baa1f6-3a01-4c5e-9ad0-ead8048b2223 1115019 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4a787 0xc000c4a788}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.031: INFO: Pod "webserver-deployment-c7997dcc8-pr5lm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pr5lm webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-pr5lm ef89167c-da94-4dea-9e37-bb733c7d900a 1115025 0 2020-03-19 21:26:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4a907 0xc000c4a908}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCla
ssName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.031: INFO: Pod "webserver-deployment-c7997dcc8-trj5k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-trj5k webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-trj5k 62734276-e221-44da-acbc-aea63e349668 1115092 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4aaa7 0xc000c4aaa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.031: INFO: Pod "webserver-deployment-c7997dcc8-vtdvb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vtdvb webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-vtdvb abc204a1-517f-4408-93b1-285fd3448dc9 1115065 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4ac27 0xc000c4ac28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-19 21:26:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.031: INFO: Pod "webserver-deployment-c7997dcc8-wl2sq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wl2sq webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-wl2sq 85d2651e-b47e-4181-95a9-dce6b88b8d98 1115045 0 2020-03-19 21:26:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4ada7 0xc000c4ada8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 19 21:26:24.031: INFO: Pod "webserver-deployment-c7997dcc8-xggc5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xggc5 webserver-deployment-c7997dcc8- deployment-6093 /api/v1/namespaces/deployment-6093/pods/webserver-deployment-c7997dcc8-xggc5 f0b195ef-496b-436e-a8b1-a00741c8286c 1114955 0 2020-03-19 21:26:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ff85fef6-6220-428d-a165-6d0dc3f08815 0xc000c4af27 0xc000c4af28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blrm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blrm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blrm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:26:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:26:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:26:24.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6093" for this suite. • [SLOW TEST:18.832 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":55,"skipped":866,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:26:25.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:26:59.015: INFO: Container started at 2020-03-19 21:26:34 +0000 UTC, pod became ready at 2020-03-19 21:26:58 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:26:59.015: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6992" for this suite. • [SLOW TEST:33.989 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":869,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:26:59.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5955 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 19 21:26:59.082: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 19 21:27:25.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.210:8080/dial?request=hostname&protocol=http&host=10.244.1.209&port=8080&tries=1'] Namespace:pod-network-test-5955 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:27:25.214: INFO: >>> kubeConfig: /root/.kube/config I0319 21:27:25.243306 6 log.go:172] (0xc002a18370) (0xc0029c3180) Create stream I0319 21:27:25.243341 6 log.go:172] (0xc002a18370) (0xc0029c3180) Stream added, broadcasting: 1 I0319 21:27:25.245258 6 log.go:172] (0xc002a18370) Reply frame received for 1 I0319 21:27:25.245308 6 log.go:172] (0xc002a18370) (0xc001ef8000) Create stream I0319 21:27:25.245321 6 log.go:172] (0xc002a18370) (0xc001ef8000) Stream added, broadcasting: 3 I0319 21:27:25.246173 6 log.go:172] (0xc002a18370) Reply frame received for 3 I0319 21:27:25.246208 6 log.go:172] (0xc002a18370) (0xc0012f01e0) Create stream I0319 21:27:25.246222 6 log.go:172] (0xc002a18370) (0xc0012f01e0) Stream added, broadcasting: 5 I0319 21:27:25.247078 6 log.go:172] (0xc002a18370) Reply frame received for 5 I0319 21:27:25.324355 6 log.go:172] (0xc002a18370) Data frame received for 3 I0319 21:27:25.324390 6 log.go:172] (0xc001ef8000) (3) Data frame handling I0319 21:27:25.324411 6 log.go:172] (0xc001ef8000) (3) Data frame sent I0319 21:27:25.324683 6 log.go:172] (0xc002a18370) Data frame received for 5 I0319 21:27:25.324724 6 log.go:172] (0xc0012f01e0) (5) Data frame handling I0319 21:27:25.324904 6 log.go:172] (0xc002a18370) Data frame received for 3 I0319 21:27:25.324934 6 log.go:172] (0xc001ef8000) (3) Data frame handling I0319 21:27:25.326856 6 log.go:172] (0xc002a18370) 
Data frame received for 1 I0319 21:27:25.326874 6 log.go:172] (0xc0029c3180) (1) Data frame handling I0319 21:27:25.326885 6 log.go:172] (0xc0029c3180) (1) Data frame sent I0319 21:27:25.326898 6 log.go:172] (0xc002a18370) (0xc0029c3180) Stream removed, broadcasting: 1 I0319 21:27:25.326912 6 log.go:172] (0xc002a18370) Go away received I0319 21:27:25.327396 6 log.go:172] (0xc002a18370) (0xc0029c3180) Stream removed, broadcasting: 1 I0319 21:27:25.327428 6 log.go:172] (0xc002a18370) (0xc001ef8000) Stream removed, broadcasting: 3 I0319 21:27:25.327444 6 log.go:172] (0xc002a18370) (0xc0012f01e0) Stream removed, broadcasting: 5 Mar 19 21:27:25.327: INFO: Waiting for responses: map[] Mar 19 21:27:25.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.210:8080/dial?request=hostname&protocol=http&host=10.244.2.249&port=8080&tries=1'] Namespace:pod-network-test-5955 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:27:25.330: INFO: >>> kubeConfig: /root/.kube/config I0319 21:27:25.366793 6 log.go:172] (0xc0017384d0) (0xc001e19680) Create stream I0319 21:27:25.366818 6 log.go:172] (0xc0017384d0) (0xc001e19680) Stream added, broadcasting: 1 I0319 21:27:25.368607 6 log.go:172] (0xc0017384d0) Reply frame received for 1 I0319 21:27:25.368627 6 log.go:172] (0xc0017384d0) (0xc0029c3220) Create stream I0319 21:27:25.368633 6 log.go:172] (0xc0017384d0) (0xc0029c3220) Stream added, broadcasting: 3 I0319 21:27:25.369650 6 log.go:172] (0xc0017384d0) Reply frame received for 3 I0319 21:27:25.369712 6 log.go:172] (0xc0017384d0) (0xc001ef8500) Create stream I0319 21:27:25.369729 6 log.go:172] (0xc0017384d0) (0xc001ef8500) Stream added, broadcasting: 5 I0319 21:27:25.370567 6 log.go:172] (0xc0017384d0) Reply frame received for 5 I0319 21:27:25.435309 6 log.go:172] (0xc0017384d0) Data frame received for 3 I0319 21:27:25.435339 6 log.go:172] (0xc0029c3220) (3) Data frame handling I0319 21:27:25.435358 6 log.go:172] (0xc0029c3220) (3) Data frame sent I0319 21:27:25.436373 6 log.go:172] (0xc0017384d0) Data frame received for 5 I0319 21:27:25.436447 6 log.go:172] (0xc001ef8500) (5) Data frame handling I0319 21:27:25.436576 6 log.go:172] (0xc0017384d0) Data frame received for 3 I0319 21:27:25.436624 6 log.go:172] (0xc0029c3220) (3) Data frame handling I0319 21:27:25.438262 6 log.go:172] (0xc0017384d0) Data frame received for 1 I0319 21:27:25.438294 6 log.go:172] (0xc001e19680) (1) Data frame handling I0319 21:27:25.438319 6 log.go:172] (0xc001e19680) (1) Data frame sent I0319 21:27:25.438337 6 log.go:172] (0xc0017384d0) (0xc001e19680) Stream removed, broadcasting: 1 I0319 21:27:25.438367 6 log.go:172] (0xc0017384d0) Go away received I0319 21:27:25.438540 6 log.go:172] (0xc0017384d0) (0xc001e19680) Stream removed, broadcasting: 1 I0319 21:27:25.438565 6 log.go:172] (0xc0017384d0) (0xc0029c3220) Stream removed, broadcasting: 3 I0319 21:27:25.438578 6 log.go:172] (0xc0017384d0) (0xc001ef8500) Stream removed, broadcasting: 5 Mar 19 21:27:25.438: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:27:25.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5955" for this suite. 
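For readers reproducing the intra-pod check above outside the e2e framework: the ExecWithOptions curl boils down to a single HTTP GET against agnhost's /dial endpoint, which asks the target pod for its hostname over the requested protocol. Below is a minimal Go sketch of the same probe. Assumptions: the pod IPs 10.244.1.210 (host-test-container-pod) and 10.244.1.209 (target netserver) are the ones from this particular run and only resolve from inside the cluster network, and the {"responses":[...]} reply shape is inferred from how the framework checks the result, not confirmed by this log.

// probe_dial.go - sketch of the /dial connectivity probe issued via ExecWithOptions above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// agnhost's /dial endpoint asks the target pod for its hostname over HTTP.
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", "10.244.1.209") // target netserver pod IP (from this run; substitute your own)
	q.Set("port", "8080")
	q.Set("tries", "1")

	// host-test-container-pod IP from this run; only reachable in-cluster.
	probe := "http://10.244.1.210:8080/dial?" + q.Encode()

	resp, err := http.Get(probe)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Expected reply (assumption): a JSON document listing the hostnames that
	// answered, e.g. {"responses":["netserver-0"]}.
	fmt.Println(string(body))
}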
• [SLOW TEST:26.423 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":880,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:27:25.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 19 21:27:29.567: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:27:29.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9055" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":889,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:27:29.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1855 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1855;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1855 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1855;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1855.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1855.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1855.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1855.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1855.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1855.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1855.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 34.141.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.141.34_udp@PTR;check="$$(dig +tcp +noall +answer +search 34.141.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.141.34_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1855 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1855;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1855 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1855;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1855.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1855.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1855.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1855.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1855.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1855.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1855.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1855.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1855.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 34.141.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.141.34_udp@PTR;check="$$(dig +tcp +noall +answer +search 34.141.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.141.34_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:27:37.830: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.833: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.837: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.841: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.844: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.850: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.853: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.871: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.875: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.878: INFO: Unable to read jessie_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.883: INFO: Unable to read jessie_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.886: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.888: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.891: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:37.909: INFO: Lookups using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1855 wheezy_tcp@dns-test-service.dns-1855 wheezy_udp@dns-test-service.dns-1855.svc wheezy_tcp@dns-test-service.dns-1855.svc wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1855 jessie_tcp@dns-test-service.dns-1855 jessie_udp@dns-test-service.dns-1855.svc jessie_tcp@dns-test-service.dns-1855.svc jessie_udp@_http._tcp.dns-test-service.dns-1855.svc jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc] Mar 19 21:27:42.914: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.921: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.925: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.928: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.930: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.935: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.937: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.957: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.960: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.963: INFO: Unable to read jessie_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.966: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.969: INFO: Unable to read jessie_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.972: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:42.998: INFO: Lookups using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1855 wheezy_tcp@dns-test-service.dns-1855 wheezy_udp@dns-test-service.dns-1855.svc wheezy_tcp@dns-test-service.dns-1855.svc wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1855 jessie_tcp@dns-test-service.dns-1855 jessie_udp@dns-test-service.dns-1855.svc jessie_tcp@dns-test-service.dns-1855.svc jessie_udp@_http._tcp.dns-test-service.dns-1855.svc jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc] Mar 19 21:27:47.915: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.918: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855 from pod 
dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.928: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.931: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.934: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.937: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.955: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.957: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.960: INFO: Unable to read jessie_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.962: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.965: INFO: Unable to read jessie_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.970: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.973: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:47.991: INFO: Lookups using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1855 wheezy_tcp@dns-test-service.dns-1855 wheezy_udp@dns-test-service.dns-1855.svc wheezy_tcp@dns-test-service.dns-1855.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1855 jessie_tcp@dns-test-service.dns-1855 jessie_udp@dns-test-service.dns-1855.svc jessie_tcp@dns-test-service.dns-1855.svc jessie_udp@_http._tcp.dns-test-service.dns-1855.svc jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc] Mar 19 21:27:52.914: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.918: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.921: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.936: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.939: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.963: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.966: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.969: INFO: Unable to read jessie_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.971: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.974: INFO: Unable to read jessie_udp@dns-test-service.dns-1855.svc from pod 
dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.980: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.984: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:52.987: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:53.002: INFO: Lookups using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1855 wheezy_tcp@dns-test-service.dns-1855 wheezy_udp@dns-test-service.dns-1855.svc wheezy_tcp@dns-test-service.dns-1855.svc wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1855 jessie_tcp@dns-test-service.dns-1855 jessie_udp@dns-test-service.dns-1855.svc jessie_tcp@dns-test-service.dns-1855.svc jessie_udp@_http._tcp.dns-test-service.dns-1855.svc jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc] Mar 19 21:27:57.914: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.918: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.928: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.934: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.936: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod 
dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.958: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.960: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.963: INFO: Unable to read jessie_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.965: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.968: INFO: Unable to read jessie_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.972: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.975: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:27:57.992: INFO: Lookups using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1855 wheezy_tcp@dns-test-service.dns-1855 wheezy_udp@dns-test-service.dns-1855.svc wheezy_tcp@dns-test-service.dns-1855.svc wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1855 jessie_tcp@dns-test-service.dns-1855 jessie_udp@dns-test-service.dns-1855.svc jessie_tcp@dns-test-service.dns-1855.svc jessie_udp@_http._tcp.dns-test-service.dns-1855.svc jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc] Mar 19 21:28:02.914: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.918: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the 
server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.935: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.938: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.960: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.963: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.966: INFO: Unable to read jessie_udp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855 from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.972: INFO: Unable to read jessie_udp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.975: INFO: Unable to read jessie_tcp@dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.979: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:02.982: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc from pod dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0: the server could not find the requested resource (get pods dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0) Mar 19 21:28:03.002: INFO: Lookups using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1855 wheezy_tcp@dns-test-service.dns-1855 wheezy_udp@dns-test-service.dns-1855.svc wheezy_tcp@dns-test-service.dns-1855.svc wheezy_udp@_http._tcp.dns-test-service.dns-1855.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1855.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1855 jessie_tcp@dns-test-service.dns-1855 jessie_udp@dns-test-service.dns-1855.svc jessie_tcp@dns-test-service.dns-1855.svc jessie_udp@_http._tcp.dns-test-service.dns-1855.svc jessie_tcp@_http._tcp.dns-test-service.dns-1855.svc] Mar 19 21:28:08.013: INFO: DNS probes using dns-1855/dns-test-fed9e8dd-abe9-416c-9615-12fc6555b6f0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:28:08.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1855" for this suite. • [SLOW TEST:38.965 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":59,"skipped":901,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:28:08.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5269 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5269 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5269 Mar 19 21:28:08.791: INFO: Found 0 stateful pods, waiting for 1 Mar 19 21:28:18.842: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 19 21:28:18.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:28:21.839: INFO: stderr: "I0319 21:28:21.712544 1059 log.go:172] (0xc000928bb0) (0xc0006a1f40) 
Create stream\nI0319 21:28:21.712581 1059 log.go:172] (0xc000928bb0) (0xc0006a1f40) Stream added, broadcasting: 1\nI0319 21:28:21.719457 1059 log.go:172] (0xc000928bb0) Reply frame received for 1\nI0319 21:28:21.719501 1059 log.go:172] (0xc000928bb0) (0xc0006066e0) Create stream\nI0319 21:28:21.719512 1059 log.go:172] (0xc000928bb0) (0xc0006066e0) Stream added, broadcasting: 3\nI0319 21:28:21.720634 1059 log.go:172] (0xc000928bb0) Reply frame received for 3\nI0319 21:28:21.720680 1059 log.go:172] (0xc000928bb0) (0xc0003b94a0) Create stream\nI0319 21:28:21.720696 1059 log.go:172] (0xc000928bb0) (0xc0003b94a0) Stream added, broadcasting: 5\nI0319 21:28:21.721812 1059 log.go:172] (0xc000928bb0) Reply frame received for 5\nI0319 21:28:21.806285 1059 log.go:172] (0xc000928bb0) Data frame received for 5\nI0319 21:28:21.806314 1059 log.go:172] (0xc0003b94a0) (5) Data frame handling\nI0319 21:28:21.806336 1059 log.go:172] (0xc0003b94a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:28:21.831610 1059 log.go:172] (0xc000928bb0) Data frame received for 3\nI0319 21:28:21.831635 1059 log.go:172] (0xc0006066e0) (3) Data frame handling\nI0319 21:28:21.831650 1059 log.go:172] (0xc0006066e0) (3) Data frame sent\nI0319 21:28:21.831788 1059 log.go:172] (0xc000928bb0) Data frame received for 3\nI0319 21:28:21.831809 1059 log.go:172] (0xc0006066e0) (3) Data frame handling\nI0319 21:28:21.831868 1059 log.go:172] (0xc000928bb0) Data frame received for 5\nI0319 21:28:21.831885 1059 log.go:172] (0xc0003b94a0) (5) Data frame handling\nI0319 21:28:21.834198 1059 log.go:172] (0xc000928bb0) Data frame received for 1\nI0319 21:28:21.834246 1059 log.go:172] (0xc0006a1f40) (1) Data frame handling\nI0319 21:28:21.834272 1059 log.go:172] (0xc0006a1f40) (1) Data frame sent\nI0319 21:28:21.834294 1059 log.go:172] (0xc000928bb0) (0xc0006a1f40) Stream removed, broadcasting: 1\nI0319 21:28:21.834317 1059 log.go:172] (0xc000928bb0) Go away received\nI0319 21:28:21.834739 1059 log.go:172] (0xc000928bb0) (0xc0006a1f40) Stream removed, broadcasting: 1\nI0319 21:28:21.834762 1059 log.go:172] (0xc000928bb0) (0xc0006066e0) Stream removed, broadcasting: 3\nI0319 21:28:21.834775 1059 log.go:172] (0xc000928bb0) (0xc0003b94a0) Stream removed, broadcasting: 5\n" Mar 19 21:28:21.840: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:28:21.840: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:28:21.843: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 19 21:28:31.848: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:28:31.848: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:28:31.877: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:31.877: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC }] Mar 19 21:28:31.877: INFO: Mar 19 21:28:31.877: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 19 21:28:32.882: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.992338303s Mar 19 21:28:33.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987614244s Mar 19 21:28:34.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982826726s Mar 19 21:28:35.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.972798093s Mar 19 21:28:36.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967972845s Mar 19 21:28:37.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962989469s Mar 19 21:28:38.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.954530319s Mar 19 21:28:39.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.949424658s Mar 19 21:28:40.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 944.297398ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5269 Mar 19 21:28:41.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:28:42.177: INFO: stderr: "I0319 21:28:42.074004 1093 log.go:172] (0xc000a00a50) (0xc0006339a0) Create stream\nI0319 21:28:42.074061 1093 log.go:172] (0xc000a00a50) (0xc0006339a0) Stream added, broadcasting: 1\nI0319 21:28:42.076699 1093 log.go:172] (0xc000a00a50) Reply frame received for 1\nI0319 21:28:42.076773 1093 log.go:172] (0xc000a00a50) (0xc000404000) Create stream\nI0319 21:28:42.076796 1093 log.go:172] (0xc000a00a50) (0xc000404000) Stream added, broadcasting: 3\nI0319 21:28:42.078016 1093 log.go:172] (0xc000a00a50) Reply frame received for 3\nI0319 21:28:42.078076 1093 log.go:172] (0xc000a00a50) (0xc000474000) Create stream\nI0319 21:28:42.078093 1093 log.go:172] (0xc000a00a50) (0xc000474000) Stream added, broadcasting: 5\nI0319 21:28:42.079031 1093 log.go:172] (0xc000a00a50) Reply frame received for 5\nI0319 21:28:42.170867 1093 log.go:172] (0xc000a00a50) Data frame received for 3\nI0319 21:28:42.170939 1093 log.go:172] (0xc000404000) (3) Data frame handling\nI0319 21:28:42.170967 1093 log.go:172] (0xc000404000) (3) Data frame sent\nI0319 21:28:42.170986 1093 log.go:172] (0xc000a00a50) Data frame received for 3\nI0319 21:28:42.171004 1093 log.go:172] (0xc000404000) (3) Data frame handling\nI0319 21:28:42.171030 1093 log.go:172] (0xc000a00a50) Data frame received for 5\nI0319 21:28:42.171060 1093 log.go:172] (0xc000474000) (5) Data frame handling\nI0319 21:28:42.171084 1093 log.go:172] (0xc000474000) (5) Data frame sent\nI0319 21:28:42.171098 1093 log.go:172] (0xc000a00a50) Data frame received for 5\nI0319 21:28:42.171114 1093 log.go:172] (0xc000474000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:28:42.172700 1093 log.go:172] (0xc000a00a50) Data frame received for 1\nI0319 21:28:42.172718 1093 log.go:172] (0xc0006339a0) (1) Data frame handling\nI0319 21:28:42.172735 1093 log.go:172] (0xc0006339a0) (1) Data frame sent\nI0319 21:28:42.173231 1093 log.go:172] (0xc000a00a50) (0xc0006339a0) Stream removed, broadcasting: 1\nI0319 21:28:42.173255 1093 log.go:172] (0xc000a00a50) Go away received\nI0319 21:28:42.173688 1093 log.go:172] (0xc000a00a50) (0xc0006339a0) Stream removed, broadcasting: 1\nI0319 21:28:42.173726 1093 log.go:172] (0xc000a00a50) (0xc000404000) Stream removed, broadcasting: 3\nI0319 21:28:42.173746 1093 log.go:172] (0xc000a00a50) (0xc000474000) Stream removed, 
broadcasting: 5\n" Mar 19 21:28:42.178: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:28:42.178: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:28:42.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:28:42.453: INFO: stderr: "I0319 21:28:42.367016 1115 log.go:172] (0xc0003c0210) (0xc00040b4a0) Create stream\nI0319 21:28:42.367102 1115 log.go:172] (0xc0003c0210) (0xc00040b4a0) Stream added, broadcasting: 1\nI0319 21:28:42.370764 1115 log.go:172] (0xc0003c0210) Reply frame received for 1\nI0319 21:28:42.370822 1115 log.go:172] (0xc0003c0210) (0xc0008cc000) Create stream\nI0319 21:28:42.370844 1115 log.go:172] (0xc0003c0210) (0xc0008cc000) Stream added, broadcasting: 3\nI0319 21:28:42.372181 1115 log.go:172] (0xc0003c0210) Reply frame received for 3\nI0319 21:28:42.372238 1115 log.go:172] (0xc0003c0210) (0xc0008cc0a0) Create stream\nI0319 21:28:42.372288 1115 log.go:172] (0xc0003c0210) (0xc0008cc0a0) Stream added, broadcasting: 5\nI0319 21:28:42.373677 1115 log.go:172] (0xc0003c0210) Reply frame received for 5\nI0319 21:28:42.445801 1115 log.go:172] (0xc0003c0210) Data frame received for 3\nI0319 21:28:42.445841 1115 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0319 21:28:42.445863 1115 log.go:172] (0xc0008cc000) (3) Data frame sent\nI0319 21:28:42.445899 1115 log.go:172] (0xc0003c0210) Data frame received for 5\nI0319 21:28:42.445937 1115 log.go:172] (0xc0008cc0a0) (5) Data frame handling\nI0319 21:28:42.445981 1115 log.go:172] (0xc0008cc0a0) (5) Data frame sent\nI0319 21:28:42.445998 1115 log.go:172] (0xc0003c0210) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0319 21:28:42.446013 1115 log.go:172] (0xc0008cc0a0) (5) Data frame handling\nI0319 21:28:42.446037 1115 log.go:172] (0xc0003c0210) Data frame received for 3\nI0319 21:28:42.446052 1115 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0319 21:28:42.447700 1115 log.go:172] (0xc0003c0210) Data frame received for 1\nI0319 21:28:42.447734 1115 log.go:172] (0xc00040b4a0) (1) Data frame handling\nI0319 21:28:42.447767 1115 log.go:172] (0xc00040b4a0) (1) Data frame sent\nI0319 21:28:42.447792 1115 log.go:172] (0xc0003c0210) (0xc00040b4a0) Stream removed, broadcasting: 1\nI0319 21:28:42.447968 1115 log.go:172] (0xc0003c0210) Go away received\nI0319 21:28:42.448445 1115 log.go:172] (0xc0003c0210) (0xc00040b4a0) Stream removed, broadcasting: 1\nI0319 21:28:42.448470 1115 log.go:172] (0xc0003c0210) (0xc0008cc000) Stream removed, broadcasting: 3\nI0319 21:28:42.448483 1115 log.go:172] (0xc0003c0210) (0xc0008cc0a0) Stream removed, broadcasting: 5\n" Mar 19 21:28:42.453: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:28:42.453: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:28:42.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:28:42.638: INFO: stderr: "I0319 21:28:42.578646 1135 log.go:172] (0xc0003eb1e0) (0xc000974140) Create stream\nI0319 
21:28:42.578741 1135 log.go:172] (0xc0003eb1e0) (0xc000974140) Stream added, broadcasting: 1\nI0319 21:28:42.581325 1135 log.go:172] (0xc0003eb1e0) Reply frame received for 1\nI0319 21:28:42.581362 1135 log.go:172] (0xc0003eb1e0) (0xc000210000) Create stream\nI0319 21:28:42.581379 1135 log.go:172] (0xc0003eb1e0) (0xc000210000) Stream added, broadcasting: 3\nI0319 21:28:42.582761 1135 log.go:172] (0xc0003eb1e0) Reply frame received for 3\nI0319 21:28:42.582813 1135 log.go:172] (0xc0003eb1e0) (0xc0002100a0) Create stream\nI0319 21:28:42.582828 1135 log.go:172] (0xc0003eb1e0) (0xc0002100a0) Stream added, broadcasting: 5\nI0319 21:28:42.583903 1135 log.go:172] (0xc0003eb1e0) Reply frame received for 5\nI0319 21:28:42.632503 1135 log.go:172] (0xc0003eb1e0) Data frame received for 3\nI0319 21:28:42.632542 1135 log.go:172] (0xc000210000) (3) Data frame handling\nI0319 21:28:42.632554 1135 log.go:172] (0xc000210000) (3) Data frame sent\nI0319 21:28:42.632562 1135 log.go:172] (0xc0003eb1e0) Data frame received for 3\nI0319 21:28:42.632568 1135 log.go:172] (0xc000210000) (3) Data frame handling\nI0319 21:28:42.632613 1135 log.go:172] (0xc0003eb1e0) Data frame received for 5\nI0319 21:28:42.632637 1135 log.go:172] (0xc0002100a0) (5) Data frame handling\nI0319 21:28:42.632653 1135 log.go:172] (0xc0002100a0) (5) Data frame sent\nI0319 21:28:42.632661 1135 log.go:172] (0xc0003eb1e0) Data frame received for 5\nI0319 21:28:42.632667 1135 log.go:172] (0xc0002100a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0319 21:28:42.634255 1135 log.go:172] (0xc0003eb1e0) Data frame received for 1\nI0319 21:28:42.634283 1135 log.go:172] (0xc000974140) (1) Data frame handling\nI0319 21:28:42.634302 1135 log.go:172] (0xc000974140) (1) Data frame sent\nI0319 21:28:42.634326 1135 log.go:172] (0xc0003eb1e0) (0xc000974140) Stream removed, broadcasting: 1\nI0319 21:28:42.634354 1135 log.go:172] (0xc0003eb1e0) Go away received\nI0319 21:28:42.634743 1135 log.go:172] (0xc0003eb1e0) (0xc000974140) Stream removed, broadcasting: 1\nI0319 21:28:42.634769 1135 log.go:172] (0xc0003eb1e0) (0xc000210000) Stream removed, broadcasting: 3\nI0319 21:28:42.634779 1135 log.go:172] (0xc0003eb1e0) (0xc0002100a0) Stream removed, broadcasting: 5\n" Mar 19 21:28:42.638: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:28:42.638: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:28:42.641: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:28:42.641: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:28:42.641: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 19 21:28:42.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:28:42.837: INFO: stderr: "I0319 21:28:42.772348 1158 log.go:172] (0xc000566840) (0xc000552000) Create stream\nI0319 21:28:42.772406 1158 log.go:172] (0xc000566840) (0xc000552000) Stream added, broadcasting: 1\nI0319 21:28:42.774752 1158 log.go:172] (0xc000566840) Reply frame received for 1\nI0319 21:28:42.774801 1158 log.go:172] (0xc000566840) 
(0xc0005e1ae0) Create stream\nI0319 21:28:42.774821 1158 log.go:172] (0xc000566840) (0xc0005e1ae0) Stream added, broadcasting: 3\nI0319 21:28:42.775745 1158 log.go:172] (0xc000566840) Reply frame received for 3\nI0319 21:28:42.775795 1158 log.go:172] (0xc000566840) (0xc0008d2000) Create stream\nI0319 21:28:42.775812 1158 log.go:172] (0xc000566840) (0xc0008d2000) Stream added, broadcasting: 5\nI0319 21:28:42.776700 1158 log.go:172] (0xc000566840) Reply frame received for 5\nI0319 21:28:42.832014 1158 log.go:172] (0xc000566840) Data frame received for 3\nI0319 21:28:42.832041 1158 log.go:172] (0xc0005e1ae0) (3) Data frame handling\nI0319 21:28:42.832048 1158 log.go:172] (0xc0005e1ae0) (3) Data frame sent\nI0319 21:28:42.832064 1158 log.go:172] (0xc000566840) Data frame received for 5\nI0319 21:28:42.832068 1158 log.go:172] (0xc0008d2000) (5) Data frame handling\nI0319 21:28:42.832074 1158 log.go:172] (0xc0008d2000) (5) Data frame sent\nI0319 21:28:42.832079 1158 log.go:172] (0xc000566840) Data frame received for 5\nI0319 21:28:42.832084 1158 log.go:172] (0xc0008d2000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:28:42.832318 1158 log.go:172] (0xc000566840) Data frame received for 3\nI0319 21:28:42.832339 1158 log.go:172] (0xc0005e1ae0) (3) Data frame handling\nI0319 21:28:42.833720 1158 log.go:172] (0xc000566840) Data frame received for 1\nI0319 21:28:42.833747 1158 log.go:172] (0xc000552000) (1) Data frame handling\nI0319 21:28:42.833766 1158 log.go:172] (0xc000552000) (1) Data frame sent\nI0319 21:28:42.833791 1158 log.go:172] (0xc000566840) (0xc000552000) Stream removed, broadcasting: 1\nI0319 21:28:42.834064 1158 log.go:172] (0xc000566840) Go away received\nI0319 21:28:42.834106 1158 log.go:172] (0xc000566840) (0xc000552000) Stream removed, broadcasting: 1\nI0319 21:28:42.834128 1158 log.go:172] (0xc000566840) (0xc0005e1ae0) Stream removed, broadcasting: 3\nI0319 21:28:42.834147 1158 log.go:172] (0xc000566840) (0xc0008d2000) Stream removed, broadcasting: 5\n" Mar 19 21:28:42.838: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:28:42.838: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:28:42.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:28:43.079: INFO: stderr: "I0319 21:28:42.963161 1180 log.go:172] (0xc000a226e0) (0xc000619ae0) Create stream\nI0319 21:28:42.963238 1180 log.go:172] (0xc000a226e0) (0xc000619ae0) Stream added, broadcasting: 1\nI0319 21:28:42.966204 1180 log.go:172] (0xc000a226e0) Reply frame received for 1\nI0319 21:28:42.966248 1180 log.go:172] (0xc000a226e0) (0xc0009e8000) Create stream\nI0319 21:28:42.966263 1180 log.go:172] (0xc000a226e0) (0xc0009e8000) Stream added, broadcasting: 3\nI0319 21:28:42.967251 1180 log.go:172] (0xc000a226e0) Reply frame received for 3\nI0319 21:28:42.967274 1180 log.go:172] (0xc000a226e0) (0xc000619b80) Create stream\nI0319 21:28:42.967281 1180 log.go:172] (0xc000a226e0) (0xc000619b80) Stream added, broadcasting: 5\nI0319 21:28:42.968201 1180 log.go:172] (0xc000a226e0) Reply frame received for 5\nI0319 21:28:43.037959 1180 log.go:172] (0xc000a226e0) Data frame received for 5\nI0319 21:28:43.037987 1180 log.go:172] (0xc000619b80) (5) Data frame handling\nI0319 21:28:43.038007 1180 log.go:172] (0xc000619b80) 
(5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:28:43.072470 1180 log.go:172] (0xc000a226e0) Data frame received for 3\nI0319 21:28:43.072503 1180 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0319 21:28:43.072529 1180 log.go:172] (0xc0009e8000) (3) Data frame sent\nI0319 21:28:43.072548 1180 log.go:172] (0xc000a226e0) Data frame received for 3\nI0319 21:28:43.072559 1180 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0319 21:28:43.072733 1180 log.go:172] (0xc000a226e0) Data frame received for 5\nI0319 21:28:43.072761 1180 log.go:172] (0xc000619b80) (5) Data frame handling\nI0319 21:28:43.074482 1180 log.go:172] (0xc000a226e0) Data frame received for 1\nI0319 21:28:43.074524 1180 log.go:172] (0xc000619ae0) (1) Data frame handling\nI0319 21:28:43.074556 1180 log.go:172] (0xc000619ae0) (1) Data frame sent\nI0319 21:28:43.074593 1180 log.go:172] (0xc000a226e0) (0xc000619ae0) Stream removed, broadcasting: 1\nI0319 21:28:43.074817 1180 log.go:172] (0xc000a226e0) Go away received\nI0319 21:28:43.075086 1180 log.go:172] (0xc000a226e0) (0xc000619ae0) Stream removed, broadcasting: 1\nI0319 21:28:43.075115 1180 log.go:172] (0xc000a226e0) (0xc0009e8000) Stream removed, broadcasting: 3\nI0319 21:28:43.075130 1180 log.go:172] (0xc000a226e0) (0xc000619b80) Stream removed, broadcasting: 5\n" Mar 19 21:28:43.079: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:28:43.079: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:28:43.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5269 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:28:43.323: INFO: stderr: "I0319 21:28:43.212434 1202 log.go:172] (0xc0000f54a0) (0xc0005e9f40) Create stream\nI0319 21:28:43.212489 1202 log.go:172] (0xc0000f54a0) (0xc0005e9f40) Stream added, broadcasting: 1\nI0319 21:28:43.215811 1202 log.go:172] (0xc0000f54a0) Reply frame received for 1\nI0319 21:28:43.215864 1202 log.go:172] (0xc0000f54a0) (0xc00052c820) Create stream\nI0319 21:28:43.215880 1202 log.go:172] (0xc0000f54a0) (0xc00052c820) Stream added, broadcasting: 3\nI0319 21:28:43.217083 1202 log.go:172] (0xc0000f54a0) Reply frame received for 3\nI0319 21:28:43.217254 1202 log.go:172] (0xc0000f54a0) (0xc0004fa640) Create stream\nI0319 21:28:43.217273 1202 log.go:172] (0xc0000f54a0) (0xc0004fa640) Stream added, broadcasting: 5\nI0319 21:28:43.218395 1202 log.go:172] (0xc0000f54a0) Reply frame received for 5\nI0319 21:28:43.285706 1202 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0319 21:28:43.285738 1202 log.go:172] (0xc0004fa640) (5) Data frame handling\nI0319 21:28:43.285764 1202 log.go:172] (0xc0004fa640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:28:43.318219 1202 log.go:172] (0xc0000f54a0) Data frame received for 3\nI0319 21:28:43.318275 1202 log.go:172] (0xc00052c820) (3) Data frame handling\nI0319 21:28:43.318361 1202 log.go:172] (0xc00052c820) (3) Data frame sent\nI0319 21:28:43.318383 1202 log.go:172] (0xc0000f54a0) Data frame received for 3\nI0319 21:28:43.318397 1202 log.go:172] (0xc00052c820) (3) Data frame handling\nI0319 21:28:43.318441 1202 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0319 21:28:43.318468 1202 log.go:172] (0xc0004fa640) (5) Data frame handling\nI0319 21:28:43.319949 1202 log.go:172] (0xc0000f54a0) Data frame 
received for 1\nI0319 21:28:43.319962 1202 log.go:172] (0xc0005e9f40) (1) Data frame handling\nI0319 21:28:43.319968 1202 log.go:172] (0xc0005e9f40) (1) Data frame sent\nI0319 21:28:43.319977 1202 log.go:172] (0xc0000f54a0) (0xc0005e9f40) Stream removed, broadcasting: 1\nI0319 21:28:43.320023 1202 log.go:172] (0xc0000f54a0) Go away received\nI0319 21:28:43.320211 1202 log.go:172] (0xc0000f54a0) (0xc0005e9f40) Stream removed, broadcasting: 1\nI0319 21:28:43.320222 1202 log.go:172] (0xc0000f54a0) (0xc00052c820) Stream removed, broadcasting: 3\nI0319 21:28:43.320227 1202 log.go:172] (0xc0000f54a0) (0xc0004fa640) Stream removed, broadcasting: 5\n" Mar 19 21:28:43.324: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:28:43.324: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:28:43.324: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:28:43.331: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 19 21:28:53.339: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:28:53.339: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:28:53.339: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:28:53.349: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:53.350: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC }] Mar 19 21:28:53.350: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:53.350: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:53.350: INFO: Mar 19 21:28:53.350: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 19 21:28:54.400: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:54.400: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC }] Mar 19 21:28:54.400: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:54.400: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:54.400: INFO: Mar 19 21:28:54.400: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 19 21:28:55.405: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:55.405: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:08 +0000 UTC }] Mar 19 21:28:55.405: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:55.405: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:55.405: INFO: Mar 19 21:28:55.405: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 19 21:28:56.409: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:56.409: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:56.409: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:56.409: INFO: Mar 19 21:28:56.409: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 19 21:28:57.413: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:57.413: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:57.413: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:57.413: INFO: Mar 19 21:28:57.413: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 19 21:28:58.417: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:58.417: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:58.418: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:58.418: INFO: Mar 19 21:28:58.418: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 19 21:28:59.422: INFO: POD NODE PHASE GRACE CONDITIONS Mar 19 21:28:59.422: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:59.422: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-19 21:28:31 +0000 UTC }] Mar 19 21:28:59.422: INFO: Mar 19 21:28:59.422: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 19 21:29:00.426: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.924259195s Mar 19 21:29:01.430: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.920183851s Mar 19 21:29:02.433: INFO: Verifying statefulset ss doesn't scale past 0 for another 916.41881ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5269 Mar 19 21:29:03.436: INFO: Scaling statefulset ss to 0 Mar 19 21:29:03.446: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 19 21:29:03.450: INFO: Deleting all statefulsets in ns statefulset-5269 Mar 19 21:29:03.452: INFO: Scaling statefulset ss to 0 Mar 19 21:29:03.460: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:29:03.463: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:29:03.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5269" for this suite. • [SLOW TEST:54.931 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":60,"skipped":917,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:29:03.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 19 21:29:03.536: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server.
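Registering an aggregated API server, which the STEP above does programmatically, amounts to creating an APIService object that tells the kube-aggregator which group/version to proxy to which in-cluster Service. A minimal sketch of an equivalent manifest follows; the group wardle.example.com, the version v1alpha1, and the Service name sample-api are illustrative assumptions (the test builds these objects in Go rather than from YAML), and insecureSkipTLSVerify stands in for a properly populated caBundle.

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # must be <version>.<group>; names assumed for illustration
spec:
  group: wardle.example.com           # API group served by the sample apiserver (assumed)
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true         # sketch only; a real setup sets spec.caBundle instead
  service:
    name: sample-api                  # Service fronting sample-apiserver-deployment (assumed name)
    namespace: aggregator-3911
EOF

Once the APIService reports Available=True, the kube-apiserver proxies requests under /apis/<group>/<version>/ to that Service, which is why the test below waits for the sample-apiserver to be ready to handle requests before exercising it.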
Mar 19 21:29:04.008: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 19 21:29:06.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250144, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250144, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250144, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250143, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 21:29:08.861: INFO: Waited 569.437149ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:29:09.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3911" for this suite. • [SLOW TEST:6.179 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":61,"skipped":919,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:29:09.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:29:10.303: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:29:12.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250150, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250150, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250150, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250150, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:29:15.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:29:15.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4230-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:29:16.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8893" for this suite. STEP: Destroying namespace "webhook-8893-markers" for this suite. 
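The mutating-webhook registration in the STEP above corresponds to a MutatingWebhookConfiguration that routes CREATE requests for the custom resource to the e2e-test-webhook Service deployed earlier. A rough sketch of such a manifest follows; the object name, handler path, and plural resource name are assumptions for illustration, and a real configuration must also carry the CA bundle for the webhook's serving certificate in clientConfig.caBundle.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-example        # hypothetical name
webhooks:
- name: mutate-crd.webhook.example.com        # hypothetical webhook name
  rules:
  - apiGroups: ["webhook.example.com"]        # CRD group seen in the STEP above
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-4230-crds"] # plural resource name (assumed)
  clientConfig:
    service:
      name: e2e-test-webhook                  # the Service whose endpoint is verified above
      namespace: webhook-8893
      path: /mutating-custom-resource         # handler path (assumed)
  sideEffects: None                           # required field in admissionregistration.k8s.io/v1
  admissionReviewVersions: ["v1", "v1beta1"]
EOF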
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.957 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":62,"skipped":935,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:29:16.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 19 21:29:16.747: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:29:33.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4418" for this suite. 
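The rename scenario is easiest to picture on a two-version CRD: when a served version is renamed, the OpenAPI document published by the apiserver must expose the new name, stop serving the old one, and leave the other version's schema untouched, which is exactly what the three "check" STEPs above assert. A minimal sketch, with the group and kind invented for illustration:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.crd-publish-openapi-test.example.com  # hypothetical
spec:
  group: crd-publish-openapi-test.example.com          # hypothetical group
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v2            # renamed from v1; the published spec must follow the rename
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v3            # untouched version; its published schema must not change
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF

After the rename, kubectl explain testcrds --api-version=crd-publish-openapi-test.example.com/v2 should resolve while the old v1 name is rejected.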
• [SLOW TEST:16.598 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":63,"skipped":937,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:29:33.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 19 21:29:37.315: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5067 PodName:pod-sharedvolume-72f34627-48f3-45f3-b044-346541684c21 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:29:37.315: INFO: >>> kubeConfig: /root/.kube/config I0319 21:29:37.354928 6 log.go:172] (0xc002a18420) (0xc0011abae0) Create stream I0319 21:29:37.354979 6 log.go:172] (0xc002a18420) (0xc0011abae0) Stream added, broadcasting: 1 I0319 21:29:37.356755 6 log.go:172] (0xc002a18420) Reply frame received for 1 I0319 21:29:37.356791 6 log.go:172] (0xc002a18420) (0xc00113e000) Create stream I0319 21:29:37.356798 6 log.go:172] (0xc002a18420) (0xc00113e000) Stream added, broadcasting: 3 I0319 21:29:37.357696 6 log.go:172] (0xc002a18420) Reply frame received for 3 I0319 21:29:37.357730 6 log.go:172] (0xc002a18420) (0xc00234e000) Create stream I0319 21:29:37.357741 6 log.go:172] (0xc002a18420) (0xc00234e000) Stream added, broadcasting: 5 I0319 21:29:37.358388 6 log.go:172] (0xc002a18420) Reply frame received for 5 I0319 21:29:37.406833 6 log.go:172] (0xc002a18420) Data frame received for 5 I0319 21:29:37.406863 6 log.go:172] (0xc00234e000) (5) Data frame handling I0319 21:29:37.406882 6 log.go:172] (0xc002a18420) Data frame received for 3 I0319 21:29:37.406892 6 log.go:172] (0xc00113e000) (3) Data frame handling I0319 21:29:37.406908 6 log.go:172] (0xc00113e000) (3) Data frame sent I0319 21:29:37.406920 6 log.go:172] (0xc002a18420) Data frame received for 3 I0319 21:29:37.406928 6 log.go:172] (0xc00113e000) (3) Data frame handling I0319 21:29:37.408094 6 log.go:172] (0xc002a18420) Data frame received for 1 I0319 21:29:37.408107 6 log.go:172] (0xc0011abae0) (1) Data frame handling I0319 21:29:37.408115 6 log.go:172] (0xc0011abae0) (1) Data frame sent I0319 21:29:37.408122 6 log.go:172] (0xc002a18420) (0xc0011abae0) Stream removed, broadcasting: 1 I0319 21:29:37.408204 6 log.go:172] (0xc002a18420) (0xc0011abae0)
Stream removed, broadcasting: 1 I0319 21:29:37.408216 6 log.go:172] (0xc002a18420) (0xc00113e000) Stream removed, broadcasting: 3 I0319 21:29:37.408326 6 log.go:172] (0xc002a18420) (0xc00234e000) Stream removed, broadcasting: 5 Mar 19 21:29:37.408: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:29:37.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0319 21:29:37.408557 6 log.go:172] (0xc002a18420) Go away received STEP: Destroying namespace "emptydir-5067" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":64,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:29:37.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5657 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5657 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5657 Mar 19 21:29:37.485: INFO: Found 0 stateful pods, waiting for 1 Mar 19 21:29:47.489: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 19 21:29:47.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:29:47.762: INFO: stderr: "I0319 21:29:47.640296 1223 log.go:172] (0xc000976630) (0xc0009341e0) Create stream\nI0319 21:29:47.640346 1223 log.go:172] (0xc000976630) (0xc0009341e0) Stream added, broadcasting: 1\nI0319 21:29:47.643083 1223 log.go:172] (0xc000976630) Reply frame received for 1\nI0319 21:29:47.643146 1223 log.go:172] (0xc000976630) (0xc00079d400) Create stream\nI0319 21:29:47.643163 1223 log.go:172] (0xc000976630) (0xc00079d400) Stream added, broadcasting: 3\nI0319 21:29:47.644302 1223 log.go:172] (0xc000976630) Reply frame received for 3\nI0319 21:29:47.644344 1223 log.go:172] (0xc000976630) (0xc000934280) Create stream\nI0319 21:29:47.644361 1223 log.go:172] (0xc000976630) (0xc000934280) Stream added, broadcasting: 5\nI0319 21:29:47.645455 1223 log.go:172] 
(0xc000976630) Reply frame received for 5\nI0319 21:29:47.716499 1223 log.go:172] (0xc000976630) Data frame received for 5\nI0319 21:29:47.716521 1223 log.go:172] (0xc000934280) (5) Data frame handling\nI0319 21:29:47.716534 1223 log.go:172] (0xc000934280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:29:47.755090 1223 log.go:172] (0xc000976630) Data frame received for 3\nI0319 21:29:47.755133 1223 log.go:172] (0xc00079d400) (3) Data frame handling\nI0319 21:29:47.755155 1223 log.go:172] (0xc00079d400) (3) Data frame sent\nI0319 21:29:47.755469 1223 log.go:172] (0xc000976630) Data frame received for 5\nI0319 21:29:47.755498 1223 log.go:172] (0xc000934280) (5) Data frame handling\nI0319 21:29:47.755542 1223 log.go:172] (0xc000976630) Data frame received for 3\nI0319 21:29:47.755569 1223 log.go:172] (0xc00079d400) (3) Data frame handling\nI0319 21:29:47.757519 1223 log.go:172] (0xc000976630) Data frame received for 1\nI0319 21:29:47.757549 1223 log.go:172] (0xc0009341e0) (1) Data frame handling\nI0319 21:29:47.757572 1223 log.go:172] (0xc0009341e0) (1) Data frame sent\nI0319 21:29:47.757693 1223 log.go:172] (0xc000976630) (0xc0009341e0) Stream removed, broadcasting: 1\nI0319 21:29:47.758060 1223 log.go:172] (0xc000976630) Go away received\nI0319 21:29:47.758110 1223 log.go:172] (0xc000976630) (0xc0009341e0) Stream removed, broadcasting: 1\nI0319 21:29:47.758128 1223 log.go:172] (0xc000976630) (0xc00079d400) Stream removed, broadcasting: 3\nI0319 21:29:47.758140 1223 log.go:172] (0xc000976630) (0xc000934280) Stream removed, broadcasting: 5\n" Mar 19 21:29:47.762: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:29:47.762: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:29:47.772: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 19 21:29:57.776: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:29:57.776: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:29:57.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999491s Mar 19 21:29:58.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.951468499s Mar 19 21:29:59.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.946873983s Mar 19 21:30:00.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.942533528s Mar 19 21:30:01.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.938921544s Mar 19 21:30:02.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.934849686s Mar 19 21:30:03.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.930327064s Mar 19 21:30:04.865: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.925638907s Mar 19 21:30:05.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.921145881s Mar 19 21:30:06.873: INFO: Verifying statefulset ss doesn't scale past 1 for another 917.506286ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5657 Mar 19 21:30:07.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:30:08.086: INFO: stderr: "I0319 21:30:08.007137 1245 log.go:172] 
(0xc0009ba2c0) (0xc0009c2140) Create stream\nI0319 21:30:08.007192 1245 log.go:172] (0xc0009ba2c0) (0xc0009c2140) Stream added, broadcasting: 1\nI0319 21:30:08.009696 1245 log.go:172] (0xc0009ba2c0) Reply frame received for 1\nI0319 21:30:08.009747 1245 log.go:172] (0xc0009ba2c0) (0xc0009aa320) Create stream\nI0319 21:30:08.009763 1245 log.go:172] (0xc0009ba2c0) (0xc0009aa320) Stream added, broadcasting: 3\nI0319 21:30:08.010687 1245 log.go:172] (0xc0009ba2c0) Reply frame received for 3\nI0319 21:30:08.010720 1245 log.go:172] (0xc0009ba2c0) (0xc0008b2000) Create stream\nI0319 21:30:08.010730 1245 log.go:172] (0xc0009ba2c0) (0xc0008b2000) Stream added, broadcasting: 5\nI0319 21:30:08.011494 1245 log.go:172] (0xc0009ba2c0) Reply frame received for 5\nI0319 21:30:08.079316 1245 log.go:172] (0xc0009ba2c0) Data frame received for 5\nI0319 21:30:08.079334 1245 log.go:172] (0xc0008b2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:30:08.079363 1245 log.go:172] (0xc0009ba2c0) Data frame received for 3\nI0319 21:30:08.079388 1245 log.go:172] (0xc0009aa320) (3) Data frame handling\nI0319 21:30:08.079401 1245 log.go:172] (0xc0009aa320) (3) Data frame sent\nI0319 21:30:08.079411 1245 log.go:172] (0xc0009ba2c0) Data frame received for 3\nI0319 21:30:08.079417 1245 log.go:172] (0xc0009aa320) (3) Data frame handling\nI0319 21:30:08.079435 1245 log.go:172] (0xc0008b2000) (5) Data frame sent\nI0319 21:30:08.079744 1245 log.go:172] (0xc0009ba2c0) Data frame received for 5\nI0319 21:30:08.079770 1245 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0319 21:30:08.081086 1245 log.go:172] (0xc0009ba2c0) Data frame received for 1\nI0319 21:30:08.081105 1245 log.go:172] (0xc0009c2140) (1) Data frame handling\nI0319 21:30:08.081210 1245 log.go:172] (0xc0009c2140) (1) Data frame sent\nI0319 21:30:08.081326 1245 log.go:172] (0xc0009ba2c0) (0xc0009c2140) Stream removed, broadcasting: 1\nI0319 21:30:08.081544 1245 log.go:172] (0xc0009ba2c0) Go away received\nI0319 21:30:08.081763 1245 log.go:172] (0xc0009ba2c0) (0xc0009c2140) Stream removed, broadcasting: 1\nI0319 21:30:08.081789 1245 log.go:172] (0xc0009ba2c0) (0xc0009aa320) Stream removed, broadcasting: 3\nI0319 21:30:08.081802 1245 log.go:172] (0xc0009ba2c0) (0xc0008b2000) Stream removed, broadcasting: 5\n" Mar 19 21:30:08.086: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:30:08.086: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:30:08.090: INFO: Found 1 stateful pods, waiting for 3 Mar 19 21:30:18.094: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:30:18.094: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:30:18.094: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 19 21:30:18.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:30:18.325: INFO: stderr: "I0319 21:30:18.253055 1265 log.go:172] (0xc000a9d550) (0xc000a5e6e0) Create stream\nI0319 21:30:18.253234 1265 log.go:172] (0xc000a9d550) (0xc000a5e6e0) Stream added, broadcasting: 1\nI0319 21:30:18.258714 1265 log.go:172] 
(0xc000a9d550) Reply frame received for 1\nI0319 21:30:18.258748 1265 log.go:172] (0xc000a9d550) (0xc000694500) Create stream\nI0319 21:30:18.258756 1265 log.go:172] (0xc000a9d550) (0xc000694500) Stream added, broadcasting: 3\nI0319 21:30:18.259834 1265 log.go:172] (0xc000a9d550) Reply frame received for 3\nI0319 21:30:18.259873 1265 log.go:172] (0xc000a9d550) (0xc0005112c0) Create stream\nI0319 21:30:18.259882 1265 log.go:172] (0xc000a9d550) (0xc0005112c0) Stream added, broadcasting: 5\nI0319 21:30:18.261058 1265 log.go:172] (0xc000a9d550) Reply frame received for 5\nI0319 21:30:18.318731 1265 log.go:172] (0xc000a9d550) Data frame received for 3\nI0319 21:30:18.318776 1265 log.go:172] (0xc000a9d550) Data frame received for 5\nI0319 21:30:18.318835 1265 log.go:172] (0xc0005112c0) (5) Data frame handling\nI0319 21:30:18.318865 1265 log.go:172] (0xc0005112c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:30:18.318898 1265 log.go:172] (0xc000694500) (3) Data frame handling\nI0319 21:30:18.318926 1265 log.go:172] (0xc000694500) (3) Data frame sent\nI0319 21:30:18.319198 1265 log.go:172] (0xc000a9d550) Data frame received for 3\nI0319 21:30:18.319246 1265 log.go:172] (0xc000694500) (3) Data frame handling\nI0319 21:30:18.319288 1265 log.go:172] (0xc000a9d550) Data frame received for 5\nI0319 21:30:18.319312 1265 log.go:172] (0xc0005112c0) (5) Data frame handling\nI0319 21:30:18.321236 1265 log.go:172] (0xc000a9d550) Data frame received for 1\nI0319 21:30:18.321277 1265 log.go:172] (0xc000a5e6e0) (1) Data frame handling\nI0319 21:30:18.321300 1265 log.go:172] (0xc000a5e6e0) (1) Data frame sent\nI0319 21:30:18.321316 1265 log.go:172] (0xc000a9d550) (0xc000a5e6e0) Stream removed, broadcasting: 1\nI0319 21:30:18.321426 1265 log.go:172] (0xc000a9d550) Go away received\nI0319 21:30:18.321731 1265 log.go:172] (0xc000a9d550) (0xc000a5e6e0) Stream removed, broadcasting: 1\nI0319 21:30:18.321754 1265 log.go:172] (0xc000a9d550) (0xc000694500) Stream removed, broadcasting: 3\nI0319 21:30:18.321770 1265 log.go:172] (0xc000a9d550) (0xc0005112c0) Stream removed, broadcasting: 5\n" Mar 19 21:30:18.326: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:30:18.326: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:30:18.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:30:18.587: INFO: stderr: "I0319 21:30:18.466991 1286 log.go:172] (0xc000604a50) (0xc0008be000) Create stream\nI0319 21:30:18.467052 1286 log.go:172] (0xc000604a50) (0xc0008be000) Stream added, broadcasting: 1\nI0319 21:30:18.469844 1286 log.go:172] (0xc000604a50) Reply frame received for 1\nI0319 21:30:18.469889 1286 log.go:172] (0xc000604a50) (0xc000a84000) Create stream\nI0319 21:30:18.469905 1286 log.go:172] (0xc000604a50) (0xc000a84000) Stream added, broadcasting: 3\nI0319 21:30:18.470845 1286 log.go:172] (0xc000604a50) Reply frame received for 3\nI0319 21:30:18.470889 1286 log.go:172] (0xc000604a50) (0xc000683b80) Create stream\nI0319 21:30:18.470908 1286 log.go:172] (0xc000604a50) (0xc000683b80) Stream added, broadcasting: 5\nI0319 21:30:18.471835 1286 log.go:172] (0xc000604a50) Reply frame received for 5\nI0319 21:30:18.526820 1286 log.go:172] (0xc000604a50) Data frame received for 5\nI0319 21:30:18.526858 1286 log.go:172] 
(0xc000683b80) (5) Data frame handling\nI0319 21:30:18.526879 1286 log.go:172] (0xc000683b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:30:18.580294 1286 log.go:172] (0xc000604a50) Data frame received for 3\nI0319 21:30:18.580339 1286 log.go:172] (0xc000a84000) (3) Data frame handling\nI0319 21:30:18.580362 1286 log.go:172] (0xc000a84000) (3) Data frame sent\nI0319 21:30:18.580443 1286 log.go:172] (0xc000604a50) Data frame received for 3\nI0319 21:30:18.580464 1286 log.go:172] (0xc000a84000) (3) Data frame handling\nI0319 21:30:18.580923 1286 log.go:172] (0xc000604a50) Data frame received for 5\nI0319 21:30:18.580944 1286 log.go:172] (0xc000683b80) (5) Data frame handling\nI0319 21:30:18.582493 1286 log.go:172] (0xc000604a50) Data frame received for 1\nI0319 21:30:18.582511 1286 log.go:172] (0xc0008be000) (1) Data frame handling\nI0319 21:30:18.582523 1286 log.go:172] (0xc0008be000) (1) Data frame sent\nI0319 21:30:18.582627 1286 log.go:172] (0xc000604a50) (0xc0008be000) Stream removed, broadcasting: 1\nI0319 21:30:18.582688 1286 log.go:172] (0xc000604a50) Go away received\nI0319 21:30:18.583052 1286 log.go:172] (0xc000604a50) (0xc0008be000) Stream removed, broadcasting: 1\nI0319 21:30:18.583072 1286 log.go:172] (0xc000604a50) (0xc000a84000) Stream removed, broadcasting: 3\nI0319 21:30:18.583084 1286 log.go:172] (0xc000604a50) (0xc000683b80) Stream removed, broadcasting: 5\n" Mar 19 21:30:18.587: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:30:18.587: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:30:18.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:30:18.797: INFO: stderr: "I0319 21:30:18.700098 1309 log.go:172] (0xc0008cc000) (0xc0009cc000) Create stream\nI0319 21:30:18.700163 1309 log.go:172] (0xc0008cc000) (0xc0009cc000) Stream added, broadcasting: 1\nI0319 21:30:18.703785 1309 log.go:172] (0xc0008cc000) Reply frame received for 1\nI0319 21:30:18.703841 1309 log.go:172] (0xc0008cc000) (0xc000904000) Create stream\nI0319 21:30:18.703891 1309 log.go:172] (0xc0008cc000) (0xc000904000) Stream added, broadcasting: 3\nI0319 21:30:18.705014 1309 log.go:172] (0xc0008cc000) Reply frame received for 3\nI0319 21:30:18.705061 1309 log.go:172] (0xc0008cc000) (0xc0009040a0) Create stream\nI0319 21:30:18.705079 1309 log.go:172] (0xc0008cc000) (0xc0009040a0) Stream added, broadcasting: 5\nI0319 21:30:18.706271 1309 log.go:172] (0xc0008cc000) Reply frame received for 5\nI0319 21:30:18.766814 1309 log.go:172] (0xc0008cc000) Data frame received for 5\nI0319 21:30:18.766839 1309 log.go:172] (0xc0009040a0) (5) Data frame handling\nI0319 21:30:18.766856 1309 log.go:172] (0xc0009040a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:30:18.790675 1309 log.go:172] (0xc0008cc000) Data frame received for 3\nI0319 21:30:18.790725 1309 log.go:172] (0xc000904000) (3) Data frame handling\nI0319 21:30:18.790760 1309 log.go:172] (0xc000904000) (3) Data frame sent\nI0319 21:30:18.790781 1309 log.go:172] (0xc0008cc000) Data frame received for 3\nI0319 21:30:18.790795 1309 log.go:172] (0xc000904000) (3) Data frame handling\nI0319 21:30:18.790852 1309 log.go:172] (0xc0008cc000) Data frame received for 5\nI0319 21:30:18.790871 1309 log.go:172] 
(0xc0009040a0) (5) Data frame handling\nI0319 21:30:18.792841 1309 log.go:172] (0xc0008cc000) Data frame received for 1\nI0319 21:30:18.792866 1309 log.go:172] (0xc0009cc000) (1) Data frame handling\nI0319 21:30:18.792891 1309 log.go:172] (0xc0009cc000) (1) Data frame sent\nI0319 21:30:18.792912 1309 log.go:172] (0xc0008cc000) (0xc0009cc000) Stream removed, broadcasting: 1\nI0319 21:30:18.793079 1309 log.go:172] (0xc0008cc000) Go away received\nI0319 21:30:18.793422 1309 log.go:172] (0xc0008cc000) (0xc0009cc000) Stream removed, broadcasting: 1\nI0319 21:30:18.793442 1309 log.go:172] (0xc0008cc000) (0xc000904000) Stream removed, broadcasting: 3\nI0319 21:30:18.793450 1309 log.go:172] (0xc0008cc000) (0xc0009040a0) Stream removed, broadcasting: 5\n" Mar 19 21:30:18.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:30:18.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:30:18.798: INFO: Waiting for statefulset status.replicas to be updated to 0 Mar 19 21:30:18.801: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 19 21:30:28.818: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:30:28.818: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:30:28.818: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 19 21:30:28.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999529s Mar 19 21:30:29.866: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962337485s Mar 19 21:30:30.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.958220546s Mar 19 21:30:31.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.953695189s Mar 19 21:30:32.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949164439s Mar 19 21:30:33.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94455704s Mar 19 21:30:34.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.939889995s Mar 19 21:30:35.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.935052036s Mar 19 21:30:36.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.929792731s Mar 19 21:30:37.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 925.679668ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5657 Mar 19 21:30:38.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:30:39.159: INFO: stderr: "I0319 21:30:39.056323 1327 log.go:172] (0xc0006f8a50) (0xc0006cc000) Create stream\nI0319 21:30:39.056381 1327 log.go:172] (0xc0006f8a50) (0xc0006cc000) Stream added, broadcasting: 1\nI0319 21:30:39.059057 1327 log.go:172] (0xc0006f8a50) Reply frame received for 1\nI0319 21:30:39.059120 1327 log.go:172] (0xc0006f8a50) (0xc0006cc0a0) Create stream\nI0319 21:30:39.059146 1327 log.go:172] (0xc0006f8a50) (0xc0006cc0a0) Stream added, broadcasting: 3\nI0319 21:30:39.060214 1327 log.go:172] (0xc0006f8a50) Reply frame received for 3\nI0319 21:30:39.060248 1327 log.go:172] (0xc0006f8a50) (0xc000aac000) Create stream\nI0319 21:30:39.060260 1327 log.go:172] (0xc0006f8a50) 
(0xc000aac000) Stream added, broadcasting: 5\nI0319 21:30:39.061280 1327 log.go:172] (0xc0006f8a50) Reply frame received for 5\nI0319 21:30:39.152577 1327 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0319 21:30:39.152601 1327 log.go:172] (0xc0006cc0a0) (3) Data frame handling\nI0319 21:30:39.152623 1327 log.go:172] (0xc0006cc0a0) (3) Data frame sent\nI0319 21:30:39.152630 1327 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0319 21:30:39.152637 1327 log.go:172] (0xc0006cc0a0) (3) Data frame handling\nI0319 21:30:39.152747 1327 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0319 21:30:39.152767 1327 log.go:172] (0xc000aac000) (5) Data frame handling\nI0319 21:30:39.152784 1327 log.go:172] (0xc000aac000) (5) Data frame sent\nI0319 21:30:39.152794 1327 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0319 21:30:39.152805 1327 log.go:172] (0xc000aac000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:30:39.154639 1327 log.go:172] (0xc0006f8a50) Data frame received for 1\nI0319 21:30:39.154675 1327 log.go:172] (0xc0006cc000) (1) Data frame handling\nI0319 21:30:39.154708 1327 log.go:172] (0xc0006cc000) (1) Data frame sent\nI0319 21:30:39.154741 1327 log.go:172] (0xc0006f8a50) (0xc0006cc000) Stream removed, broadcasting: 1\nI0319 21:30:39.154904 1327 log.go:172] (0xc0006f8a50) Go away received\nI0319 21:30:39.155227 1327 log.go:172] (0xc0006f8a50) (0xc0006cc000) Stream removed, broadcasting: 1\nI0319 21:30:39.155268 1327 log.go:172] (0xc0006f8a50) (0xc0006cc0a0) Stream removed, broadcasting: 3\nI0319 21:30:39.155296 1327 log.go:172] (0xc0006f8a50) (0xc000aac000) Stream removed, broadcasting: 5\n" Mar 19 21:30:39.159: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:30:39.159: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:30:39.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:30:39.383: INFO: stderr: "I0319 21:30:39.303330 1350 log.go:172] (0xc000026dc0) (0xc00097e000) Create stream\nI0319 21:30:39.303397 1350 log.go:172] (0xc000026dc0) (0xc00097e000) Stream added, broadcasting: 1\nI0319 21:30:39.306168 1350 log.go:172] (0xc000026dc0) Reply frame received for 1\nI0319 21:30:39.306206 1350 log.go:172] (0xc000026dc0) (0xc0006a1a40) Create stream\nI0319 21:30:39.306214 1350 log.go:172] (0xc000026dc0) (0xc0006a1a40) Stream added, broadcasting: 3\nI0319 21:30:39.307051 1350 log.go:172] (0xc000026dc0) Reply frame received for 3\nI0319 21:30:39.307089 1350 log.go:172] (0xc000026dc0) (0xc00097e0a0) Create stream\nI0319 21:30:39.307107 1350 log.go:172] (0xc000026dc0) (0xc00097e0a0) Stream added, broadcasting: 5\nI0319 21:30:39.307990 1350 log.go:172] (0xc000026dc0) Reply frame received for 5\nI0319 21:30:39.376766 1350 log.go:172] (0xc000026dc0) Data frame received for 3\nI0319 21:30:39.376822 1350 log.go:172] (0xc0006a1a40) (3) Data frame handling\nI0319 21:30:39.376847 1350 log.go:172] (0xc0006a1a40) (3) Data frame sent\nI0319 21:30:39.376868 1350 log.go:172] (0xc000026dc0) Data frame received for 5\nI0319 21:30:39.376891 1350 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0319 21:30:39.376900 1350 log.go:172] (0xc00097e0a0) (5) Data frame sent\nI0319 21:30:39.376911 1350 log.go:172] (0xc000026dc0) Data frame received for 5\nI0319 
21:30:39.376919 1350 log.go:172] (0xc00097e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:30:39.376968 1350 log.go:172] (0xc000026dc0) Data frame received for 3\nI0319 21:30:39.377016 1350 log.go:172] (0xc0006a1a40) (3) Data frame handling\nI0319 21:30:39.379089 1350 log.go:172] (0xc000026dc0) Data frame received for 1\nI0319 21:30:39.379120 1350 log.go:172] (0xc00097e000) (1) Data frame handling\nI0319 21:30:39.379138 1350 log.go:172] (0xc00097e000) (1) Data frame sent\nI0319 21:30:39.379160 1350 log.go:172] (0xc000026dc0) (0xc00097e000) Stream removed, broadcasting: 1\nI0319 21:30:39.379183 1350 log.go:172] (0xc000026dc0) Go away received\nI0319 21:30:39.379569 1350 log.go:172] (0xc000026dc0) (0xc00097e000) Stream removed, broadcasting: 1\nI0319 21:30:39.379593 1350 log.go:172] (0xc000026dc0) (0xc0006a1a40) Stream removed, broadcasting: 3\nI0319 21:30:39.379606 1350 log.go:172] (0xc000026dc0) (0xc00097e0a0) Stream removed, broadcasting: 5\n" Mar 19 21:30:39.384: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:30:39.384: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:30:39.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5657 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:30:39.574: INFO: stderr: "I0319 21:30:39.512945 1371 log.go:172] (0xc00099a000) (0xc00078c0a0) Create stream\nI0319 21:30:39.513012 1371 log.go:172] (0xc00099a000) (0xc00078c0a0) Stream added, broadcasting: 1\nI0319 21:30:39.515111 1371 log.go:172] (0xc00099a000) Reply frame received for 1\nI0319 21:30:39.515166 1371 log.go:172] (0xc00099a000) (0xc000936000) Create stream\nI0319 21:30:39.515186 1371 log.go:172] (0xc00099a000) (0xc000936000) Stream added, broadcasting: 3\nI0319 21:30:39.516120 1371 log.go:172] (0xc00099a000) Reply frame received for 3\nI0319 21:30:39.516138 1371 log.go:172] (0xc00099a000) (0xc0009360a0) Create stream\nI0319 21:30:39.516144 1371 log.go:172] (0xc00099a000) (0xc0009360a0) Stream added, broadcasting: 5\nI0319 21:30:39.517021 1371 log.go:172] (0xc00099a000) Reply frame received for 5\nI0319 21:30:39.568848 1371 log.go:172] (0xc00099a000) Data frame received for 3\nI0319 21:30:39.568885 1371 log.go:172] (0xc000936000) (3) Data frame handling\nI0319 21:30:39.568898 1371 log.go:172] (0xc000936000) (3) Data frame sent\nI0319 21:30:39.568907 1371 log.go:172] (0xc00099a000) Data frame received for 3\nI0319 21:30:39.568922 1371 log.go:172] (0xc000936000) (3) Data frame handling\nI0319 21:30:39.568940 1371 log.go:172] (0xc00099a000) Data frame received for 5\nI0319 21:30:39.568947 1371 log.go:172] (0xc0009360a0) (5) Data frame handling\nI0319 21:30:39.568954 1371 log.go:172] (0xc0009360a0) (5) Data frame sent\nI0319 21:30:39.568960 1371 log.go:172] (0xc00099a000) Data frame received for 5\nI0319 21:30:39.568966 1371 log.go:172] (0xc0009360a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:30:39.570384 1371 log.go:172] (0xc00099a000) Data frame received for 1\nI0319 21:30:39.570408 1371 log.go:172] (0xc00078c0a0) (1) Data frame handling\nI0319 21:30:39.570451 1371 log.go:172] (0xc00078c0a0) (1) Data frame sent\nI0319 21:30:39.570532 1371 log.go:172] (0xc00099a000) (0xc00078c0a0) Stream removed, broadcasting: 1\nI0319 21:30:39.570575 1371 log.go:172] (0xc00099a000) Go away 
received\nI0319 21:30:39.570887 1371 log.go:172] (0xc00099a000) (0xc00078c0a0) Stream removed, broadcasting: 1\nI0319 21:30:39.570907 1371 log.go:172] (0xc00099a000) (0xc000936000) Stream removed, broadcasting: 3\nI0319 21:30:39.570917 1371 log.go:172] (0xc00099a000) (0xc0009360a0) Stream removed, broadcasting: 5\n" Mar 19 21:30:39.574: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:30:39.574: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:30:39.574: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 19 21:30:59.600: INFO: Deleting all statefulsets in ns statefulset-5657 Mar 19 21:30:59.603: INFO: Scaling statefulset ss to 0 Mar 19 21:30:59.613: INFO: Waiting for statefulset status.replicas to be updated to 0 Mar 19 21:30:59.616: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:30:59.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5657" for this suite. • [SLOW TEST:82.227 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":65,"skipped":990,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:30:59.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:30:59.729: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 19 21:31:04.735: INFO: Pod name cleanup-pod: Found 1 pod out of 1 STEP: ensuring each pod is running Mar 19 21:31:04.735: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 19 21:31:08.778: 
INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8822 /apis/apps/v1/namespaces/deployment-8822/deployments/test-cleanup-deployment 9a75115e-70e1-421f-86b8-0bab16e8accd 1116998 1 2020-03-19 21:31:04 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ada868 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-19 21:31:04 +0000 UTC,LastTransitionTime:2020-03-19 21:31:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-19 21:31:07 +0000 UTC,LastTransitionTime:2020-03-19 21:31:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 19 21:31:08.781: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-8822 /apis/apps/v1/namespaces/deployment-8822/replicasets/test-cleanup-deployment-55ffc6b7b6 ee444895-304b-4333-9b20-fd83607dc4b2 1116987 1 2020-03-19 21:31:04 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9a75115e-70e1-421f-86b8-0bab16e8accd 0xc003adad27 0xc003adad28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003adade8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:31:08.784: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-nzln9" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-nzln9 test-cleanup-deployment-55ffc6b7b6- deployment-8822 /api/v1/namespaces/deployment-8822/pods/test-cleanup-deployment-55ffc6b7b6-nzln9 66475012-6ace-43bb-8684-223965cdc42c 1116986 0 2020-03-19 21:31:04 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 ee444895-304b-4333-9b20-fd83607dc4b2 0xc003adb2a7 0xc003adb2a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wklfg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wklfg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wklfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpread
Constraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:31:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:31:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:31:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:31:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.217,StartTime:2020-03-19 21:31:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:31:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://ba5956d1db606d98a890e4487b219658fe90f93707a38c8996405ff92c10417c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:31:08.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8822" for this suite. • [SLOW TEST:9.148 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":66,"skipped":1005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:31:08.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 19 21:31:08.888: INFO: Waiting up to 5m0s for pod "pod-9fd652dc-6d59-464d-a945-04ac05948f86" in namespace "emptydir-2088" to be "success or failure" Mar 19 21:31:08.899: INFO: Pod "pod-9fd652dc-6d59-464d-a945-04ac05948f86": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.361411ms Mar 19 21:31:10.903: INFO: Pod "pod-9fd652dc-6d59-464d-a945-04ac05948f86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014721177s Mar 19 21:31:12.907: INFO: Pod "pod-9fd652dc-6d59-464d-a945-04ac05948f86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018901069s STEP: Saw pod success Mar 19 21:31:12.907: INFO: Pod "pod-9fd652dc-6d59-464d-a945-04ac05948f86" satisfied condition "success or failure" Mar 19 21:31:12.910: INFO: Trying to get logs from node jerma-worker pod pod-9fd652dc-6d59-464d-a945-04ac05948f86 container test-container: STEP: delete the pod Mar 19 21:31:12.953: INFO: Waiting for pod pod-9fd652dc-6d59-464d-a945-04ac05948f86 to disappear Mar 19 21:31:12.982: INFO: Pod pod-9fd652dc-6d59-464d-a945-04ac05948f86 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:31:12.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2088" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1042,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:31:12.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 19 21:31:13.056: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 19 21:31:13.067: INFO: Waiting for terminating namespaces to be deleted... 
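(The per-node pod inventory logged next can be reproduced against the same cluster with a field selector on spec.nodeName. This is a reader-side equivalent of what the framework prints here, not a command the suite itself runs:

kubectl --kubeconfig=/root/.kube/config get pods --all-namespaces \
  --field-selector spec.nodeName=jerma-worker

Swap in jerma-worker2 for the second node; kubectl has supported this field selector for pods for a long time.)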
Mar 19 21:31:13.077: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 19 21:31:13.082: INFO: test-cleanup-deployment-55ffc6b7b6-nzln9 from deployment-8822 started at 2020-03-19 21:31:04 +0000 UTC (1 container status recorded) Mar 19 21:31:13.082: INFO: Container agnhost ready: true, restart count 0 Mar 19 21:31:13.082: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 19 21:31:13.082: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 21:31:13.082: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 19 21:31:13.082: INFO: Container kube-proxy ready: true, restart count 0 Mar 19 21:31:13.082: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 19 21:31:13.100: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 19 21:31:13.100: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 21:31:13.100: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 19 21:31:13.100: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e2ed9531-0649-48d2-9225-30de4d8018eb 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2, but using the UDP protocol, on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-e2ed9531-0649-48d2-9225-30de4d8018eb off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e2ed9531-0649-48d2-9225-30de4d8018eb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:31:29.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7994" for this suite. 
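(The predicate exercised above treats a host port as occupied only for a specific (hostIP, hostPort, protocol) triple, which is why pod2 (different hostIP) and pod3 (different protocol) both land next to pod1. A minimal manual sketch, with a hypothetical pod name and a generic pause image standing in for the suite's own pods:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo-1   # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2   # pin to one node, as the test does
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed placeholder image
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
EOF

A second copy differing only in hostIP: 127.0.0.2, or only in protocol: UDP, should also reach Running; a copy reusing the exact same triple should stay Pending with a port-conflict scheduling event.)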
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.397 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":68,"skipped":1046,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:31:29.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:31:29.508: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.65407ms)
Mar 19 21:31:29.512: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.500511ms)
Mar 19 21:31:29.515: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.118522ms)
Mar 19 21:31:29.518: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.794226ms)
Mar 19 21:31:29.521: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.221312ms)
Mar 19 21:31:29.524: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.487693ms)
Mar 19 21:31:29.528: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.966463ms)
Mar 19 21:31:29.532: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.134144ms)
Mar 19 21:31:29.535: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.282482ms)
Mar 19 21:31:29.538: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.192894ms)
Mar 19 21:31:29.542: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.349424ms)
Mar 19 21:31:29.545: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.480345ms)
Mar 19 21:31:29.548: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.289949ms)
Mar 19 21:31:29.552: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.512175ms)
Mar 19 21:31:29.556: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.548746ms)
Mar 19 21:31:29.569: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 13.805038ms)
Mar 19 21:31:29.572: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.99884ms)
Mar 19 21:31:29.576: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.690971ms)
Mar 19 21:31:29.580: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.635932ms)
Mar 19 21:31:29.583: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ 
(200; 3.232703ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:31:29.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8598" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":69,"skipped":1051,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:31:29.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 21:31:29.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1077' Mar 19 21:31:29.775: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 19 21:31:29.775: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Mar 19 21:31:29.818: INFO: scanned /root for discovery docs: Mar 19 21:31:29.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1077' Mar 19 21:31:45.677: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 19 21:31:45.678: INFO: stdout: "Created e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122\nScaling up e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 19 21:31:45.678: INFO: stdout: "Created e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122\nScaling up e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 19 21:31:45.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1077' Mar 19 21:31:45.772: INFO: stderr: "" Mar 19 21:31:45.772: INFO: stdout: "e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122-xcj82 " Mar 19 21:31:45.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122-xcj82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1077' Mar 19 21:31:45.880: INFO: stderr: "" Mar 19 21:31:45.880: INFO: stdout: "true" Mar 19 21:31:45.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122-xcj82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1077' Mar 19 21:31:45.972: INFO: stderr: "" Mar 19 21:31:45.972: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 19 21:31:45.972: INFO: e2e-test-httpd-rc-8df5c1f4da38e2ecb644593bac9cb122-xcj82 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 19 21:31:45.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1077' Mar 19 21:31:46.076: INFO: stderr: "" Mar 19 21:31:46.076: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:31:46.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1077" for this suite. 
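(Both deprecation warnings in this block point the same way: --generator=run/v1 and kubectl rolling-update operate on bare ReplicationControllers, and the replacement workflow is a Deployment plus a rollout. A hedged modern equivalent, with a hypothetical deployment name:

kubectl create deployment e2e-demo --image=docker.io/library/httpd:2.4.38-alpine
# "httpd" is the container name kubectl derives from the image basename
kubectl set image deployment/e2e-demo httpd=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/e2e-demo

Unlike rolling-update, which copied pods between two RCs client-side, the Deployment controller performs the surge/unavailability bookkeeping server-side.)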
• [SLOW TEST:16.549 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":70,"skipped":1064,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:31:46.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-97dc STEP: Creating a pod to test atomic-volume-subpath Mar 19 21:31:46.242: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-97dc" in namespace "subpath-2116" to be "success or failure" Mar 19 21:31:46.253: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.292903ms Mar 19 21:31:48.256: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014421027s Mar 19 21:31:50.260: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.018257493s Mar 19 21:31:52.263: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 6.021388624s Mar 19 21:31:54.267: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 8.025014288s Mar 19 21:31:56.271: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 10.029734444s Mar 19 21:31:58.275: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 12.033638924s Mar 19 21:32:00.279: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 14.037800544s Mar 19 21:32:02.284: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 16.041958319s Mar 19 21:32:04.288: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 18.046206882s Mar 19 21:32:06.291: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 20.049817652s Mar 19 21:32:08.296: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Running", Reason="", readiness=true. Elapsed: 22.053921143s Mar 19 21:32:10.300: INFO: Pod "pod-subpath-test-projected-97dc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.057858869s STEP: Saw pod success Mar 19 21:32:10.300: INFO: Pod "pod-subpath-test-projected-97dc" satisfied condition "success or failure" Mar 19 21:32:10.302: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-97dc container test-container-subpath-projected-97dc: STEP: delete the pod Mar 19 21:32:10.344: INFO: Waiting for pod pod-subpath-test-projected-97dc to disappear Mar 19 21:32:10.360: INFO: Pod pod-subpath-test-projected-97dc no longer exists STEP: Deleting pod pod-subpath-test-projected-97dc Mar 19 21:32:10.360: INFO: Deleting pod "pod-subpath-test-projected-97dc" in namespace "subpath-2116" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:10.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2116" for this suite. • [SLOW TEST:24.231 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":71,"skipped":1071,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:10.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 19 21:32:14.466: INFO: Pod pod-hostip-f95e3662-f0ec-4316-a753-4e2710ed9545 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:14.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6125" for this suite. 
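(The assertion above only reads status.hostIP from the pod object. While the namespace still exists, the same field is one jsonpath away; the pod name below is the one from this run:

kubectl --kubeconfig=/root/.kube/config -n pods-6125 get pod \
  pod-hostip-f95e3662-f0ec-4316-a753-4e2710ed9545 \
  -o jsonpath='{.status.hostIP}'

The same value can also be exposed inside a container via a downward-API env var referencing fieldPath status.hostIP.)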
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:14.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-fd1c151f-1083-4fdc-bd8a-402a6fc12ac9 STEP: Creating a pod to test consume configMaps Mar 19 21:32:14.554: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52" in namespace "projected-3173" to be "success or failure" Mar 19 21:32:14.582: INFO: Pod "pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52": Phase="Pending", Reason="", readiness=false. Elapsed: 27.891119ms Mar 19 21:32:16.666: INFO: Pod "pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111767095s Mar 19 21:32:18.670: INFO: Pod "pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116126694s STEP: Saw pod success Mar 19 21:32:18.670: INFO: Pod "pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52" satisfied condition "success or failure" Mar 19 21:32:18.672: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52 container projected-configmap-volume-test: STEP: delete the pod Mar 19 21:32:18.743: INFO: Waiting for pod pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52 to disappear Mar 19 21:32:18.746: INFO: Pod pod-projected-configmaps-38fcca3c-69f8-40e3-9849-1e2397a48c52 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:18.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3173" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1110,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:18.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-2177f62a-da6e-4f83-b740-e606feb11f75 STEP: Creating a pod to test consume secrets Mar 19 21:32:19.345: INFO: Waiting up to 5m0s for pod "pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce" in namespace "secrets-5420" to be "success or failure" Mar 19 21:32:19.361: INFO: Pod "pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce": Phase="Pending", Reason="", readiness=false. Elapsed: 16.235095ms Mar 19 21:32:21.365: INFO: Pod "pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020329058s Mar 19 21:32:23.369: INFO: Pod "pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024587954s STEP: Saw pod success Mar 19 21:32:23.369: INFO: Pod "pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce" satisfied condition "success or failure" Mar 19 21:32:23.372: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce container secret-volume-test: STEP: delete the pod Mar 19 21:32:23.405: INFO: Waiting for pod pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce to disappear Mar 19 21:32:23.409: INFO: Pod pod-secrets-0e499e14-884f-4497-95d3-957cc51a02ce no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:23.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5420" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1132,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:23.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:32:24.153: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:32:26.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250344, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250344, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250344, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250344, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:32:29.195: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:32:29.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-101-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:30.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3555" for this suite. STEP: Destroying namespace "webhook-3555-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.990 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":75,"skipped":1133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:30.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 19 21:32:30.466: INFO: Waiting up to 5m0s for pod "pod-71709178-f5ea-44d1-b7a5-387712e3ebcf" in namespace "emptydir-1904" to be "success or failure" Mar 19 21:32:30.487: INFO: Pod "pod-71709178-f5ea-44d1-b7a5-387712e3ebcf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.658371ms Mar 19 21:32:32.492: INFO: Pod "pod-71709178-f5ea-44d1-b7a5-387712e3ebcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025257149s Mar 19 21:32:34.496: INFO: Pod "pod-71709178-f5ea-44d1-b7a5-387712e3ebcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029742754s STEP: Saw pod success Mar 19 21:32:34.496: INFO: Pod "pod-71709178-f5ea-44d1-b7a5-387712e3ebcf" satisfied condition "success or failure" Mar 19 21:32:34.500: INFO: Trying to get logs from node jerma-worker2 pod pod-71709178-f5ea-44d1-b7a5-387712e3ebcf container test-container: STEP: delete the pod Mar 19 21:32:34.519: INFO: Waiting for pod pod-71709178-f5ea-44d1-b7a5-387712e3ebcf to disappear Mar 19 21:32:34.523: INFO: Pod pod-71709178-f5ea-44d1-b7a5-387712e3ebcf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:34.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1904" for this suite. 
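Annotation: the EmptyDir test names encode (user, mode, medium). A small sketch of the pattern both emptydir cases in this run exercise, assuming illustrative image, UID, and mount path; the later (non-root,0666,tmpfs) case differs only in the medium, as noted in the comment.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRootUID := int64(1001) // the "(non-root,...)" part of the test name
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// Default medium = node-local backing storage. The (…,tmpfs)
			// variant later in the run sets Medium: corev1.StorageMediumMemory.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	c := corev1.Container{
		Name:            "test-container",
		Image:           "busybox",
		Command:         []string{"sh", "-c", "ls -ld /test-volume"},
		SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
		VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
	}
	fmt.Printf("%+v\n%+v\n", vol, c)
}
```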
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1157,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:34.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 19 21:32:38.656: INFO: &Pod{ObjectMeta:{send-events-10451860-4e1f-4020-8783-924a2e5f8b66 events-206 /api/v1/namespaces/events-206/pods/send-events-10451860-4e1f-4020-8783-924a2e5f8b66 0ff223fe-5f55-4213-9d7d-149ce5700d8a 1117690 0 2020-03-19 21:32:34 +0000 UTC map[name:foo time:615404737] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l7n25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l7n25,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l7n25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:32:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:32:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:32:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:32:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.224,StartTime:2020-03-19 21:32:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:32:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4103fa76b886d7a842ebca7be38a2458149ddd336223081a9c0a660a8d016823,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 19 21:32:40.662: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 19 21:32:42.665: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:42.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-206" for this suite. 
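Annotation: "Saw scheduler event" / "Saw kubelet event" come from listing events with a field selector scoped to the pod and the emitting component. A rough client-go sketch of that lookup; the field-selector keys are real event selectors, but the kubeconfig path, namespace, and pod name are placeholders, and the context-taking List signature assumes a recent client-go.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Select events whose involvedObject is our pod; "source" narrows to the
	// component (default-scheduler vs kubelet) being checked for.
	selector := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-example", // placeholder pod name
		"involvedObject.namespace": "default",
		"source":                   "default-scheduler",
	}.AsSelector().String()

	events, err := client.CoreV1().Events("default").List(context.TODO(),
		metav1.ListOptions{FieldSelector: selector})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}
```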
• [SLOW TEST:8.194 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":77,"skipped":1174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:42.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:32:42.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1" in namespace "projected-4821" to be "success or failure" Mar 19 21:32:42.793: INFO: Pod "downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341002ms Mar 19 21:32:44.797: INFO: Pod "downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006911903s Mar 19 21:32:46.810: INFO: Pod "downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019497576s STEP: Saw pod success Mar 19 21:32:46.810: INFO: Pod "downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1" satisfied condition "success or failure" Mar 19 21:32:46.812: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1 container client-container: STEP: delete the pod Mar 19 21:32:46.840: INFO: Waiting for pod downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1 to disappear Mar 19 21:32:46.853: INFO: Pod downwardapi-volume-7a220e82-be13-4af8-814a-60a09f4dc2a1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:32:46.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4821" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1224,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:32:46.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:03.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2266" for this suite. • [SLOW TEST:16.294 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":79,"skipped":1233,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:03.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:33:03.241: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-75bd5319-ffe2-40e7-914e-db34e308e4ee" in namespace "security-context-test-992" to be "success or failure" Mar 19 21:33:03.244: INFO: Pod "busybox-privileged-false-75bd5319-ffe2-40e7-914e-db34e308e4ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829595ms Mar 19 21:33:05.253: INFO: Pod "busybox-privileged-false-75bd5319-ffe2-40e7-914e-db34e308e4ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01160505s Mar 19 21:33:07.257: INFO: Pod "busybox-privileged-false-75bd5319-ffe2-40e7-914e-db34e308e4ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01563953s Mar 19 21:33:07.257: INFO: Pod "busybox-privileged-false-75bd5319-ffe2-40e7-914e-db34e308e4ee" satisfied condition "success or failure" Mar 19 21:33:07.264: INFO: Got logs for pod "busybox-privileged-false-75bd5319-ffe2-40e7-914e-db34e308e4ee": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:07.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-992" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1239,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:07.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 19 21:33:07.328: INFO: Waiting up to 5m0s for pod "pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635" in namespace "emptydir-6238" to be "success or failure" Mar 19 21:33:07.332: INFO: Pod "pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069923ms Mar 19 21:33:09.335: INFO: Pod "pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006602257s Mar 19 21:33:11.339: INFO: Pod "pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010988455s STEP: Saw pod success Mar 19 21:33:11.339: INFO: Pod "pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635" satisfied condition "success or failure" Mar 19 21:33:11.342: INFO: Trying to get logs from node jerma-worker2 pod pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635 container test-container: STEP: delete the pod Mar 19 21:33:11.399: INFO: Waiting for pod pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635 to disappear Mar 19 21:33:11.405: INFO: Pod pod-ed93b40c-6234-405b-92e5-9b0ab5fb9635 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:11.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6238" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:11.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6f7d3a3f-02cd-41b2-a6b3-e4de9862f089 STEP: Creating a pod to test consume configMaps Mar 19 21:33:11.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08" in namespace "configmap-2200" to be "success or failure" Mar 19 21:33:11.494: INFO: Pod "pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119396ms Mar 19 21:33:13.499: INFO: Pod "pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008580801s Mar 19 21:33:15.504: INFO: Pod "pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014249274s STEP: Saw pod success Mar 19 21:33:15.505: INFO: Pod "pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08" satisfied condition "success or failure" Mar 19 21:33:15.507: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08 container configmap-volume-test: STEP: delete the pod Mar 19 21:33:15.519: INFO: Waiting for pod pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08 to disappear Mar 19 21:33:15.537: INFO: Pod pod-configmaps-bb5d870d-1ece-4486-85fc-8dcdac624d08 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:15.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2200" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1289,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:15.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:33:15.617: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b" in namespace "downward-api-6853" to be "success or failure" Mar 19 21:33:15.632: INFO: Pod "downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.9629ms Mar 19 21:33:17.636: INFO: Pod "downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018909761s Mar 19 21:33:19.640: INFO: Pod "downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0229716s STEP: Saw pod success Mar 19 21:33:19.640: INFO: Pod "downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b" satisfied condition "success or failure" Mar 19 21:33:19.643: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b container client-container: STEP: delete the pod Mar 19 21:33:19.683: INFO: Waiting for pod downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b to disappear Mar 19 21:33:19.692: INFO: Pod downwardapi-volume-ba5a87fe-42b3-4285-ad91-5803a9c22a9b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:19.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6853" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1301,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:19.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:33.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7198" for this suite. • [SLOW TEST:14.104 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":84,"skipped":1307,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:33.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:33:34.416: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:33:36.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250414, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250414, loc:(*time.Location)(0x7d83a80)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250414, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250414, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:33:39.464: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:39.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2892" for this suite. STEP: Destroying namespace "webhook-2892-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":85,"skipped":1316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:39.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-caf8253e-95e2-4186-86d8-c8bd1394e66e STEP: Creating a pod to test consume configMaps Mar 19 21:33:39.804: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4" in namespace "projected-8173" to be "success or failure" Mar 19 21:33:39.807: INFO: Pod "pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.01976ms Mar 19 21:33:41.811: INFO: Pod "pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007308603s Mar 19 21:33:43.815: INFO: Pod "pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011638232s STEP: Saw pod success Mar 19 21:33:43.815: INFO: Pod "pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4" satisfied condition "success or failure" Mar 19 21:33:43.818: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4 container projected-configmap-volume-test: STEP: delete the pod Mar 19 21:33:43.839: INFO: Waiting for pod pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4 to disappear Mar 19 21:33:43.857: INFO: Pod pod-projected-configmaps-0e04b2b5-4238-4eea-b9a8-dc35b413cde4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:43.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8173" for this suite. 
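Annotation: projected volumes reuse the same KeyToPath mapping seen in the plain configMap and secret volume tests, but allow several sources under one mount point. A sketch of the single-configMap case, with illustrative names; additional Secret, DownwardAPI, or ServiceAccountToken projections could be appended to Sources.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						// The "mappings" part: key data-2 appears in the
						// volume at path/to/data-2 instead of its own name.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.Projected.Sources[0].ConfigMap)
}
```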
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:43.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:48.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-158" for this suite. • [SLOW TEST:5.101 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":87,"skipped":1382,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:48.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:33:49.018: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:50.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5878" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":88,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:50.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 19 21:33:54.966: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ba686ed3-ef76-4e06-b4eb-3fb0b31a7b26" Mar 19 21:33:54.966: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ba686ed3-ef76-4e06-b4eb-3fb0b31a7b26" in namespace "pods-3905" to be "terminated due to deadline exceeded" Mar 19 21:33:54.987: INFO: Pod "pod-update-activedeadlineseconds-ba686ed3-ef76-4e06-b4eb-3fb0b31a7b26": Phase="Running", Reason="", readiness=true. Elapsed: 21.592025ms Mar 19 21:33:56.992: INFO: Pod "pod-update-activedeadlineseconds-ba686ed3-ef76-4e06-b4eb-3fb0b31a7b26": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.025776777s Mar 19 21:33:56.992: INFO: Pod "pod-update-activedeadlineseconds-ba686ed3-ef76-4e06-b4eb-3fb0b31a7b26" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:33:56.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3905" for this suite. 
• [SLOW TEST:6.778 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1407,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:33:57.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:33:57.063: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:34:02.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2372" for this suite. 
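Annotation: listing CustomResourceDefinition objects, as this test does, goes through the apiextensions clientset rather than the core one. A minimal sketch with a placeholder kubeconfig path and recent context-taking signatures:

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(cfg)

	// CRDs are cluster-scoped, so there is no namespace argument.
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
```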
• [SLOW TEST:5.981 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":90,"skipped":1407,"failed":0} SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:34:02.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-564774e8-5815-4ec2-9b20-aa28d01fc80f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:34:07.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2467" for this suite. 
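Annotation: the binary-data test relies on ConfigMap having two payload maps: `data` for UTF-8 text and `binaryData` for arbitrary bytes. The "Waiting for pod with text data" / "with binary data" steps check one mounted file from each. A sketch with placeholder keys and bytes:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data: map[string]string{
			"data": "value-1", // plain text key
		},
		BinaryData: map[string][]byte{
			// Arbitrary bytes, not required to be valid UTF-8; stored as
			// base64 in the API but mounted verbatim in the volume.
			"dump": {0xde, 0xca, 0xfe, 0x00, 0xba, 0xad},
		},
	}
	fmt.Printf("data=%q binary=%x\n", cm.Data["data"], cm.BinaryData["dump"])
}
```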
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:34:07.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:34:36.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8852" for this suite. 
• [SLOW TEST:29.216 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1442,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:34:36.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:34:36.381: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 19 21:34:39.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 create -f -' Mar 19 21:34:42.511: INFO: stderr: "" Mar 19 21:34:42.511: INFO: stdout: "e2e-test-crd-publish-openapi-5470-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 19 21:34:42.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 delete e2e-test-crd-publish-openapi-5470-crds test-foo' Mar 19 21:34:42.620: INFO: stderr: "" Mar 19 21:34:42.620: INFO: stdout: "e2e-test-crd-publish-openapi-5470-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 19 21:34:42.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 apply -f -' Mar 19 21:34:42.882: INFO: stderr: "" Mar 19 21:34:42.882: INFO: stdout: "e2e-test-crd-publish-openapi-5470-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 19 21:34:42.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 delete e2e-test-crd-publish-openapi-5470-crds test-foo' Mar 19 21:34:42.999: INFO: stderr: "" Mar 19 21:34:42.999: INFO: stdout: "e2e-test-crd-publish-openapi-5470-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 19 21:34:42.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 create -f -' Mar 19 21:34:43.213: INFO: rc: 1 Mar 19 21:34:43.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2453 apply -f -' Mar 19 21:34:43.472: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 19 21:34:43.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 create -f -' Mar 19 21:34:43.700: INFO: rc: 1 Mar 19 21:34:43.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2453 apply -f -' Mar 19 21:34:43.906: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 19 21:34:43.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5470-crds' Mar 19 21:34:44.210: INFO: stderr: "" Mar 19 21:34:44.210: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5470-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 19 21:34:44.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5470-crds.metadata' Mar 19 21:34:44.460: INFO: stderr: "" Mar 19 21:34:44.460: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5470-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 19 21:34:44.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5470-crds.spec' Mar 19 21:34:44.686: INFO: stderr: "" Mar 19 21:34:44.686: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5470-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 19 21:34:44.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5470-crds.spec.bars' Mar 19 21:34:44.898: INFO: stderr: "" Mar 19 21:34:44.898: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5470-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 19 21:34:44.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5470-crds.spec.bars2' Mar 19 21:34:45.129: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:34:47.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2453" for this suite. • [SLOW TEST:11.695 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":93,"skipped":1443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:34:48.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-83f662db-1dca-4bcb-970c-a95a1f1d72a1 STEP: Creating a pod to test consume secrets Mar 19 21:34:48.086: INFO: Waiting up to 5m0s for pod "pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0" in namespace "secrets-3081" to be "success or failure" Mar 19 21:34:48.102: INFO: Pod "pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0": Phase="Pending", Reason="", readiness=false.
Elapsed: 16.05859ms Mar 19 21:34:50.107: INFO: Pod "pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02052143s Mar 19 21:34:52.111: INFO: Pod "pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024555491s STEP: Saw pod success Mar 19 21:34:52.111: INFO: Pod "pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0" satisfied condition "success or failure" Mar 19 21:34:52.114: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0 container secret-volume-test: STEP: delete the pod Mar 19 21:34:52.153: INFO: Waiting for pod pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0 to disappear Mar 19 21:34:52.200: INFO: Pod pod-secrets-bfb6f9e2-62ec-472f-a120-978b8d7d57b0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:34:52.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3081" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1495,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:34:52.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:34:52.260: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 19 21:34:52.294: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 19 21:34:57.297: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 19 21:34:57.297: INFO: Creating deployment "test-rolling-update-deployment" Mar 19 21:34:57.301: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 19 21:34:57.332: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 19 21:34:59.340: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 19 21:34:59.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250497, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250497, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250497, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250497, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 21:35:01.348: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 19 21:35:01.358: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6999 /apis/apps/v1/namespaces/deployment-6999/deployments/test-rolling-update-deployment 68e7447b-b8bf-45dd-ba15-7738cc9fa82f 1118836 1 2020-03-19 21:34:57 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003735798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-19 21:34:57 +0000 UTC,LastTransitionTime:2020-03-19 21:34:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-19 21:35:00 +0000 UTC,LastTransitionTime:2020-03-19 21:34:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 19 21:35:01.362: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-6999 /apis/apps/v1/namespaces/deployment-6999/replicasets/test-rolling-update-deployment-67cf4f6444 0797b54c-c4af-4a8e-812e-89ce179fb0d4 1118825 1 2020-03-19 21:34:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 68e7447b-b8bf-45dd-ba15-7738cc9fa82f 0xc003735dd7 0xc003735dd8}] []
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003735e68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:35:01.362: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 19 21:35:01.362: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6999 /apis/apps/v1/namespaces/deployment-6999/replicasets/test-rolling-update-controller 7334c2f5-c1bd-436f-8351-3cdf033f3e95 1118834 2 2020-03-19 21:34:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 68e7447b-b8bf-45dd-ba15-7738cc9fa82f 0xc003735ca7 0xc003735ca8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003735d28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:35:01.365: INFO: Pod "test-rolling-update-deployment-67cf4f6444-j4b4d" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-j4b4d test-rolling-update-deployment-67cf4f6444- deployment-6999 /api/v1/namespaces/deployment-6999/pods/test-rolling-update-deployment-67cf4f6444-j4b4d d5a29607-6eb2-42cb-9a6b-c165ef4e44b1 1118824 0 2020-03-19 21:34:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 0797b54c-c4af-4a8e-812e-89ce179fb0d4 0xc0036d8437 0xc0036d8438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rs4hm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rs4hm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rs4hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:34:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:35:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:35:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:34:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.19,StartTime:2020-03-19 21:34:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 21:34:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://72735c5e048d51cfb96ca78cc06efac71f9de91c84918a4785cc490f0a06a042,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:35:01.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6999" for this suite. • [SLOW TEST:9.162 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":95,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:35:01.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:35:01.408: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 19 21:35:04.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2659 create -f -' Mar 19 21:35:07.745: INFO: stderr: "" Mar 19 21:35:07.745: INFO: stdout: "e2e-test-crd-publish-openapi-9560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 19 21:35:07.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2659 delete e2e-test-crd-publish-openapi-9560-crds test-cr' Mar 19 21:35:07.866: INFO: stderr: "" Mar 19 21:35:07.866: INFO: stdout: "e2e-test-crd-publish-openapi-9560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 19 21:35:07.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2659 apply -f -' Mar 19 21:35:08.142: INFO: stderr: "" Mar 19 21:35:08.142: INFO: stdout:
"e2e-test-crd-publish-openapi-9560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 19 21:35:08.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2659 delete e2e-test-crd-publish-openapi-9560-crds test-cr' Mar 19 21:35:08.237: INFO: stderr: "" Mar 19 21:35:08.237: INFO: stdout: "e2e-test-crd-publish-openapi-9560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 19 21:35:08.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9560-crds' Mar 19 21:35:08.484: INFO: stderr: "" Mar 19 21:35:08.484: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9560-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:35:11.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2659" for this suite. • [SLOW TEST:9.985 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":96,"skipped":1547,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:35:11.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-7h59 STEP: Creating a pod to test atomic-volume-subpath Mar 19 21:35:11.427: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7h59" in namespace "subpath-6549" to be "success or failure" Mar 19 21:35:11.464: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Pending", Reason="", readiness=false. Elapsed: 36.771172ms Mar 19 21:35:13.476: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049007823s Mar 19 21:35:15.481: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 4.05331003s Mar 19 21:35:17.485: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true.
Elapsed: 6.057589668s Mar 19 21:35:19.489: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 8.061950721s Mar 19 21:35:21.493: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 10.06613245s Mar 19 21:35:23.497: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 12.070041593s Mar 19 21:35:25.501: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 14.074096659s Mar 19 21:35:27.506: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 16.078421021s Mar 19 21:35:29.510: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 18.082653991s Mar 19 21:35:31.514: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 20.086866939s Mar 19 21:35:33.518: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Running", Reason="", readiness=true. Elapsed: 22.090800634s Mar 19 21:35:35.521: INFO: Pod "pod-subpath-test-configmap-7h59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.094189643s STEP: Saw pod success Mar 19 21:35:35.521: INFO: Pod "pod-subpath-test-configmap-7h59" satisfied condition "success or failure" Mar 19 21:35:35.524: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-7h59 container test-container-subpath-configmap-7h59: STEP: delete the pod Mar 19 21:35:35.574: INFO: Waiting for pod pod-subpath-test-configmap-7h59 to disappear Mar 19 21:35:35.583: INFO: Pod pod-subpath-test-configmap-7h59 no longer exists STEP: Deleting pod pod-subpath-test-configmap-7h59 Mar 19 21:35:35.583: INFO: Deleting pod "pod-subpath-test-configmap-7h59" in namespace "subpath-6549" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:35:35.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6549" for this suite. 
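
A note on the fixture just exercised: pod-subpath-test-configmap-7h59 mounts a single key of a ConfigMap through a subPath volume mount, and its roughly 24 seconds in Running are the test container repeatedly verifying the projected file content. For readers who want to reproduce the shape of the fixture by hand, here is a minimal sketch; the names, image, and file contents below are illustrative, not the suite's generated objects:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo                 # hypothetical name
data:
  file.txt: "hello from a configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox                   # the suite uses its own test image
    command: ["cat", "/mnt/file.txt"]
    volumeMounts:
    - name: cm
      mountPath: /mnt/file.txt
      subPath: file.txt              # mount one key rather than the whole volume
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF

Worth knowing when adapting this: kubelet updates whole configMap volumes atomically via symlink swaps, but a subPath mount is a bind mount of a single file and does not see later updates, which is exactly the boundary this "Atomic writer volumes" group probes.
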
• [SLOW TEST:24.235 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":97,"skipped":1548,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:35:35.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:35:36.087: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:35:38.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250536, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250536, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250536, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720250536, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:35:41.128: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:35:41.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2613" for this suite. 
STEP: Destroying namespace "webhook-2613-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.638 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":98,"skipped":1556,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:35:41.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-4d8ee4ce-d067-4f00-bac0-b3fd7a64c4a3 STEP: Creating configMap with name cm-test-opt-upd-00b3cb90-7cc6-449b-a1eb-f5e3582b3700 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4d8ee4ce-d067-4f00-bac0-b3fd7a64c4a3 STEP: Updating configmap cm-test-opt-upd-00b3cb90-7cc6-449b-a1eb-f5e3582b3700 STEP: Creating configMap with name cm-test-opt-create-201c36f5-8eb3-4247-85f2-58b3945e42b4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:36:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4916" for this suite. 
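
What the spec above is doing, condensed: a pod mounts a projected volume whose sources are optional ConfigMaps; the test then deletes one source, updates another, creates a third, and waits for the mounted files to catch up. A minimal sketch of such a volume, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo               # hypothetical
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: cm-opt-del           # can be deleted mid-flight
          optional: true             # pod starts even if this ConfigMap is absent
      - configMap:
          name: cm-opt-upd           # can be updated mid-flight
          optional: true
EOF

Because the sources are optional, the pod runs whether or not the ConfigMaps exist, and it is kubelet's periodic volume sync that eventually reflects creates, updates, and deletes under /etc/projected; that propagation delay is why the spec above spends over a minute in "waiting to observe update in volume".
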
• [SLOW TEST:78.521 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:36:59.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:36:59.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9" in namespace "downward-api-8082" to be "success or failure" Mar 19 21:36:59.830: INFO: Pod "downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.466717ms Mar 19 21:37:01.834: INFO: Pod "downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054826s Mar 19 21:37:03.839: INFO: Pod "downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012636094s STEP: Saw pod success Mar 19 21:37:03.839: INFO: Pod "downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9" satisfied condition "success or failure" Mar 19 21:37:03.842: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9 container client-container: STEP: delete the pod Mar 19 21:37:03.874: INFO: Waiting for pod downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9 to disappear Mar 19 21:37:03.878: INFO: Pod downwardapi-volume-1a4c35fe-b9b7-45b9-969d-513943073bb9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:37:03.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8082" for this suite. 
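
The downwardapi-volume pod above surfaces the container's own memory request as a file via the downward API. A minimal sketch with illustrative names and a 64Mi request:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo                # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container   # required when used in a volume
          resource: requests.memory
EOF

With no divisor specified the value is rendered in bytes, so this sketch would print 67108864 for a 64Mi request; the conformance pod asserts on the analogous output of its own container and then completes, which is the "success or failure" condition logged above.
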
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1588,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:37:03.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 19 21:37:14.034: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.034: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.071090 6 log.go:172] (0xc001738000) (0xc001a4f180) Create stream I0319 21:37:14.071126 6 log.go:172] (0xc001738000) (0xc001a4f180) Stream added, broadcasting: 1 I0319 21:37:14.073533 6 log.go:172] (0xc001738000) Reply frame received for 1 I0319 21:37:14.073586 6 log.go:172] (0xc001738000) (0xc0019dc140) Create stream I0319 21:37:14.073606 6 log.go:172] (0xc001738000) (0xc0019dc140) Stream added, broadcasting: 3 I0319 21:37:14.074770 6 log.go:172] (0xc001738000) Reply frame received for 3 I0319 21:37:14.074814 6 log.go:172] (0xc001738000) (0xc001ae4960) Create stream I0319 21:37:14.074827 6 log.go:172] (0xc001738000) (0xc001ae4960) Stream added, broadcasting: 5 I0319 21:37:14.075791 6 log.go:172] (0xc001738000) Reply frame received for 5 I0319 21:37:14.164712 6 log.go:172] (0xc001738000) Data frame received for 3 I0319 21:37:14.164750 6 log.go:172] (0xc0019dc140) (3) Data frame handling I0319 21:37:14.164761 6 log.go:172] (0xc0019dc140) (3) Data frame sent I0319 21:37:14.164767 6 log.go:172] (0xc001738000) Data frame received for 3 I0319 21:37:14.164775 6 log.go:172] (0xc0019dc140) (3) Data frame handling I0319 21:37:14.164850 6 log.go:172] (0xc001738000) Data frame received for 5 I0319 21:37:14.164893 6 log.go:172] (0xc001ae4960) (5) Data frame handling I0319 21:37:14.166137 6 log.go:172] (0xc001738000) Data frame received for 1 I0319 21:37:14.166166 6 log.go:172] (0xc001a4f180) (1) Data frame handling I0319 21:37:14.166180 6 log.go:172] (0xc001a4f180) (1) Data frame sent I0319 21:37:14.166196 6 log.go:172] (0xc001738000) (0xc001a4f180) Stream removed, broadcasting: 1 I0319 21:37:14.166213 6 log.go:172] (0xc001738000) Go away received I0319 21:37:14.166438 6 log.go:172] (0xc001738000) (0xc001a4f180) Stream removed, broadcasting: 1 I0319 21:37:14.166468 6 log.go:172] (0xc001738000) (0xc0019dc140) Stream removed, broadcasting: 3 I0319 21:37:14.166487 6 log.go:172] (0xc001738000) (0xc001ae4960) Stream removed, broadcasting: 5 Mar 19 21:37:14.166: INFO: Exec stderr: "" Mar 19 21:37:14.166: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.166: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.202999 6 log.go:172] (0xc0052300b0) (0xc000e8f720) Create stream I0319 21:37:14.203030 6 log.go:172] (0xc0052300b0) (0xc000e8f720) Stream added, broadcasting: 1 I0319 21:37:14.205712 6 log.go:172] (0xc0052300b0) Reply frame received for 1 I0319 21:37:14.205763 6 log.go:172] (0xc0052300b0) (0xc00129ee60) Create stream I0319 21:37:14.205778 6 log.go:172] (0xc0052300b0) (0xc00129ee60) Stream added, broadcasting: 3 I0319 21:37:14.206794 6 log.go:172] (0xc0052300b0) Reply frame received for 3 I0319 21:37:14.206843 6 log.go:172] (0xc0052300b0) (0xc00129ef00) Create stream I0319 21:37:14.206862 6 log.go:172] (0xc0052300b0) (0xc00129ef00) Stream added, broadcasting: 5 I0319 21:37:14.207764 6 log.go:172] (0xc0052300b0) Reply frame received for 5 I0319 21:37:14.275956 6 log.go:172] (0xc0052300b0) Data frame received for 5 I0319 21:37:14.276006 6 log.go:172] (0xc00129ef00) (5) Data frame handling I0319 21:37:14.276030 6 log.go:172] (0xc0052300b0) Data frame received for 3 I0319 21:37:14.276044 6 log.go:172] (0xc00129ee60) (3) Data frame handling I0319 21:37:14.276058 6 log.go:172] (0xc00129ee60) (3) Data frame sent I0319 21:37:14.276070 6 log.go:172] (0xc0052300b0) Data frame received for 3 I0319 21:37:14.276080 6 log.go:172] (0xc00129ee60) (3) Data frame handling I0319 21:37:14.277636 6 log.go:172] (0xc0052300b0) Data frame received for 1 I0319 21:37:14.277697 6 log.go:172] (0xc000e8f720) (1) Data frame handling I0319 21:37:14.277725 6 log.go:172] (0xc000e8f720) (1) Data frame sent I0319 21:37:14.277748 6 log.go:172] (0xc0052300b0) (0xc000e8f720) Stream removed, broadcasting: 1 I0319 21:37:14.277798 6 log.go:172] (0xc0052300b0) Go away received I0319 21:37:14.277880 6 log.go:172] (0xc0052300b0) (0xc000e8f720) Stream removed, broadcasting: 1 I0319 21:37:14.277924 6 log.go:172] (0xc0052300b0) (0xc00129ee60) Stream removed, broadcasting: 3 I0319 21:37:14.277954 6 log.go:172] (0xc0052300b0) (0xc00129ef00) Stream removed, broadcasting: 5 Mar 19 21:37:14.277: INFO: Exec stderr: "" Mar 19 21:37:14.278: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.278: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.316880 6 log.go:172] (0xc005230420) (0xc000e8f7c0) Create stream I0319 21:37:14.316921 6 log.go:172] (0xc005230420) (0xc000e8f7c0) Stream added, broadcasting: 1 I0319 21:37:14.321278 6 log.go:172] (0xc005230420) Reply frame received for 1 I0319 21:37:14.321332 6 log.go:172] (0xc005230420) (0xc001a4f220) Create stream I0319 21:37:14.321348 6 log.go:172] (0xc005230420) (0xc001a4f220) Stream added, broadcasting: 3 I0319 21:37:14.322922 6 log.go:172] (0xc005230420) Reply frame received for 3 I0319 21:37:14.322965 6 log.go:172] (0xc005230420) (0xc001ae4be0) Create stream I0319 21:37:14.322978 6 log.go:172] (0xc005230420) (0xc001ae4be0) Stream added, broadcasting: 5 I0319 21:37:14.323909 6 log.go:172] (0xc005230420) Reply frame received for 5 I0319 21:37:14.372981 6 log.go:172] (0xc005230420) Data frame received for 5 I0319 21:37:14.373035 6 log.go:172] (0xc005230420) Data frame received for 3 I0319 21:37:14.373102 6 log.go:172] (0xc001a4f220) (3) Data frame handling I0319 
21:37:14.373299 6 log.go:172] (0xc001a4f220) (3) Data frame sent I0319 21:37:14.373334 6 log.go:172] (0xc001ae4be0) (5) Data frame handling I0319 21:37:14.373368 6 log.go:172] (0xc005230420) Data frame received for 3 I0319 21:37:14.373401 6 log.go:172] (0xc001a4f220) (3) Data frame handling I0319 21:37:14.375315 6 log.go:172] (0xc005230420) Data frame received for 1 I0319 21:37:14.375357 6 log.go:172] (0xc000e8f7c0) (1) Data frame handling I0319 21:37:14.375381 6 log.go:172] (0xc000e8f7c0) (1) Data frame sent I0319 21:37:14.375409 6 log.go:172] (0xc005230420) (0xc000e8f7c0) Stream removed, broadcasting: 1 I0319 21:37:14.375437 6 log.go:172] (0xc005230420) Go away received I0319 21:37:14.375578 6 log.go:172] (0xc005230420) (0xc000e8f7c0) Stream removed, broadcasting: 1 I0319 21:37:14.375610 6 log.go:172] (0xc005230420) (0xc001a4f220) Stream removed, broadcasting: 3 I0319 21:37:14.375634 6 log.go:172] (0xc005230420) (0xc001ae4be0) Stream removed, broadcasting: 5 Mar 19 21:37:14.375: INFO: Exec stderr: "" Mar 19 21:37:14.375: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.375: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.410266 6 log.go:172] (0xc002a186e0) (0xc00129f9a0) Create stream I0319 21:37:14.410298 6 log.go:172] (0xc002a186e0) (0xc00129f9a0) Stream added, broadcasting: 1 I0319 21:37:14.412213 6 log.go:172] (0xc002a186e0) Reply frame received for 1 I0319 21:37:14.412259 6 log.go:172] (0xc002a186e0) (0xc00129fb80) Create stream I0319 21:37:14.412276 6 log.go:172] (0xc002a186e0) (0xc00129fb80) Stream added, broadcasting: 3 I0319 21:37:14.413611 6 log.go:172] (0xc002a186e0) Reply frame received for 3 I0319 21:37:14.413674 6 log.go:172] (0xc002a186e0) (0xc00129fea0) Create stream I0319 21:37:14.413696 6 log.go:172] (0xc002a186e0) (0xc00129fea0) Stream added, broadcasting: 5 I0319 21:37:14.414692 6 log.go:172] (0xc002a186e0) Reply frame received for 5 I0319 21:37:14.469027 6 log.go:172] (0xc002a186e0) Data frame received for 3 I0319 21:37:14.469062 6 log.go:172] (0xc00129fb80) (3) Data frame handling I0319 21:37:14.469073 6 log.go:172] (0xc00129fb80) (3) Data frame sent I0319 21:37:14.469086 6 log.go:172] (0xc002a186e0) Data frame received for 5 I0319 21:37:14.469100 6 log.go:172] (0xc00129fea0) (5) Data frame handling I0319 21:37:14.469304 6 log.go:172] (0xc002a186e0) Data frame received for 3 I0319 21:37:14.469330 6 log.go:172] (0xc00129fb80) (3) Data frame handling I0319 21:37:14.471029 6 log.go:172] (0xc002a186e0) Data frame received for 1 I0319 21:37:14.471039 6 log.go:172] (0xc00129f9a0) (1) Data frame handling I0319 21:37:14.471045 6 log.go:172] (0xc00129f9a0) (1) Data frame sent I0319 21:37:14.471053 6 log.go:172] (0xc002a186e0) (0xc00129f9a0) Stream removed, broadcasting: 1 I0319 21:37:14.471108 6 log.go:172] (0xc002a186e0) (0xc00129f9a0) Stream removed, broadcasting: 1 I0319 21:37:14.471117 6 log.go:172] (0xc002a186e0) (0xc00129fb80) Stream removed, broadcasting: 3 I0319 21:37:14.471124 6 log.go:172] (0xc002a186e0) (0xc00129fea0) Stream removed, broadcasting: 5 Mar 19 21:37:14.471: INFO: Exec stderr: "" I0319 21:37:14.471150 6 log.go:172] (0xc002a186e0) Go away received STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 19 21:37:14.471: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-pod 
ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.471: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.508724 6 log.go:172] (0xc001e704d0) (0xc0019dc500) Create stream I0319 21:37:14.508752 6 log.go:172] (0xc001e704d0) (0xc0019dc500) Stream added, broadcasting: 1 I0319 21:37:14.510542 6 log.go:172] (0xc001e704d0) Reply frame received for 1 I0319 21:37:14.510573 6 log.go:172] (0xc001e704d0) (0xc000e8f9a0) Create stream I0319 21:37:14.510585 6 log.go:172] (0xc001e704d0) (0xc000e8f9a0) Stream added, broadcasting: 3 I0319 21:37:14.511606 6 log.go:172] (0xc001e704d0) Reply frame received for 3 I0319 21:37:14.511663 6 log.go:172] (0xc001e704d0) (0xc00234e000) Create stream I0319 21:37:14.511689 6 log.go:172] (0xc001e704d0) (0xc00234e000) Stream added, broadcasting: 5 I0319 21:37:14.512753 6 log.go:172] (0xc001e704d0) Reply frame received for 5 I0319 21:37:14.572311 6 log.go:172] (0xc001e704d0) Data frame received for 3 I0319 21:37:14.572341 6 log.go:172] (0xc000e8f9a0) (3) Data frame handling I0319 21:37:14.572360 6 log.go:172] (0xc000e8f9a0) (3) Data frame sent I0319 21:37:14.572372 6 log.go:172] (0xc001e704d0) Data frame received for 3 I0319 21:37:14.572379 6 log.go:172] (0xc000e8f9a0) (3) Data frame handling I0319 21:37:14.572421 6 log.go:172] (0xc001e704d0) Data frame received for 5 I0319 21:37:14.572449 6 log.go:172] (0xc00234e000) (5) Data frame handling I0319 21:37:14.574024 6 log.go:172] (0xc001e704d0) Data frame received for 1 I0319 21:37:14.574069 6 log.go:172] (0xc0019dc500) (1) Data frame handling I0319 21:37:14.574096 6 log.go:172] (0xc0019dc500) (1) Data frame sent I0319 21:37:14.574120 6 log.go:172] (0xc001e704d0) (0xc0019dc500) Stream removed, broadcasting: 1 I0319 21:37:14.574163 6 log.go:172] (0xc001e704d0) Go away received I0319 21:37:14.574243 6 log.go:172] (0xc001e704d0) (0xc0019dc500) Stream removed, broadcasting: 1 I0319 21:37:14.574267 6 log.go:172] (0xc001e704d0) (0xc000e8f9a0) Stream removed, broadcasting: 3 I0319 21:37:14.574278 6 log.go:172] (0xc001e704d0) (0xc00234e000) Stream removed, broadcasting: 5 Mar 19 21:37:14.574: INFO: Exec stderr: "" Mar 19 21:37:14.574: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.574: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.611739 6 log.go:172] (0xc002a189a0) (0xc00234e1e0) Create stream I0319 21:37:14.611782 6 log.go:172] (0xc002a189a0) (0xc00234e1e0) Stream added, broadcasting: 1 I0319 21:37:14.614490 6 log.go:172] (0xc002a189a0) Reply frame received for 1 I0319 21:37:14.614536 6 log.go:172] (0xc002a189a0) (0xc00234e320) Create stream I0319 21:37:14.614550 6 log.go:172] (0xc002a189a0) (0xc00234e320) Stream added, broadcasting: 3 I0319 21:37:14.615439 6 log.go:172] (0xc002a189a0) Reply frame received for 3 I0319 21:37:14.615463 6 log.go:172] (0xc002a189a0) (0xc00234e500) Create stream I0319 21:37:14.615471 6 log.go:172] (0xc002a189a0) (0xc00234e500) Stream added, broadcasting: 5 I0319 21:37:14.616201 6 log.go:172] (0xc002a189a0) Reply frame received for 5 I0319 21:37:14.682430 6 log.go:172] (0xc002a189a0) Data frame received for 5 I0319 21:37:14.682452 6 log.go:172] (0xc00234e500) (5) Data frame handling I0319 21:37:14.682480 6 log.go:172] (0xc002a189a0) Data frame received for 3 I0319 21:37:14.682506 6 log.go:172] (0xc00234e320) (3) Data frame handling I0319 21:37:14.682520 6 
log.go:172] (0xc00234e320) (3) Data frame sent I0319 21:37:14.682527 6 log.go:172] (0xc002a189a0) Data frame received for 3 I0319 21:37:14.682535 6 log.go:172] (0xc00234e320) (3) Data frame handling I0319 21:37:14.683835 6 log.go:172] (0xc002a189a0) Data frame received for 1 I0319 21:37:14.683853 6 log.go:172] (0xc00234e1e0) (1) Data frame handling I0319 21:37:14.683860 6 log.go:172] (0xc00234e1e0) (1) Data frame sent I0319 21:37:14.683869 6 log.go:172] (0xc002a189a0) (0xc00234e1e0) Stream removed, broadcasting: 1 I0319 21:37:14.683912 6 log.go:172] (0xc002a189a0) Go away received I0319 21:37:14.683943 6 log.go:172] (0xc002a189a0) (0xc00234e1e0) Stream removed, broadcasting: 1 I0319 21:37:14.683959 6 log.go:172] (0xc002a189a0) (0xc00234e320) Stream removed, broadcasting: 3 I0319 21:37:14.683974 6 log.go:172] (0xc002a189a0) (0xc00234e500) Stream removed, broadcasting: 5 Mar 19 21:37:14.683: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 19 21:37:14.684: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.684: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.717674 6 log.go:172] (0xc002a18d10) (0xc00234e780) Create stream I0319 21:37:14.717699 6 log.go:172] (0xc002a18d10) (0xc00234e780) Stream added, broadcasting: 1 I0319 21:37:14.719682 6 log.go:172] (0xc002a18d10) Reply frame received for 1 I0319 21:37:14.719713 6 log.go:172] (0xc002a18d10) (0xc00234e8c0) Create stream I0319 21:37:14.719724 6 log.go:172] (0xc002a18d10) (0xc00234e8c0) Stream added, broadcasting: 3 I0319 21:37:14.720701 6 log.go:172] (0xc002a18d10) Reply frame received for 3 I0319 21:37:14.720742 6 log.go:172] (0xc002a18d10) (0xc00234ed20) Create stream I0319 21:37:14.720756 6 log.go:172] (0xc002a18d10) (0xc00234ed20) Stream added, broadcasting: 5 I0319 21:37:14.721874 6 log.go:172] (0xc002a18d10) Reply frame received for 5 I0319 21:37:14.788844 6 log.go:172] (0xc002a18d10) Data frame received for 3 I0319 21:37:14.788893 6 log.go:172] (0xc00234e8c0) (3) Data frame handling I0319 21:37:14.788911 6 log.go:172] (0xc00234e8c0) (3) Data frame sent I0319 21:37:14.788923 6 log.go:172] (0xc002a18d10) Data frame received for 3 I0319 21:37:14.788945 6 log.go:172] (0xc00234e8c0) (3) Data frame handling I0319 21:37:14.788976 6 log.go:172] (0xc002a18d10) Data frame received for 5 I0319 21:37:14.789000 6 log.go:172] (0xc00234ed20) (5) Data frame handling I0319 21:37:14.790373 6 log.go:172] (0xc002a18d10) Data frame received for 1 I0319 21:37:14.790399 6 log.go:172] (0xc00234e780) (1) Data frame handling I0319 21:37:14.790414 6 log.go:172] (0xc00234e780) (1) Data frame sent I0319 21:37:14.790436 6 log.go:172] (0xc002a18d10) (0xc00234e780) Stream removed, broadcasting: 1 I0319 21:37:14.790460 6 log.go:172] (0xc002a18d10) Go away received I0319 21:37:14.790528 6 log.go:172] (0xc002a18d10) (0xc00234e780) Stream removed, broadcasting: 1 I0319 21:37:14.790547 6 log.go:172] (0xc002a18d10) (0xc00234e8c0) Stream removed, broadcasting: 3 I0319 21:37:14.790556 6 log.go:172] (0xc002a18d10) (0xc00234ed20) Stream removed, broadcasting: 5 Mar 19 21:37:14.790: INFO: Exec stderr: "" Mar 19 21:37:14.790: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Mar 19 21:37:14.790: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.817346 6 log.go:172] (0xc001649d90) (0xc001ae5180) Create stream I0319 21:37:14.817371 6 log.go:172] (0xc001649d90) (0xc001ae5180) Stream added, broadcasting: 1 I0319 21:37:14.819185 6 log.go:172] (0xc001649d90) Reply frame received for 1 I0319 21:37:14.819223 6 log.go:172] (0xc001649d90) (0xc001ae52c0) Create stream I0319 21:37:14.819235 6 log.go:172] (0xc001649d90) (0xc001ae52c0) Stream added, broadcasting: 3 I0319 21:37:14.820142 6 log.go:172] (0xc001649d90) Reply frame received for 3 I0319 21:37:14.820171 6 log.go:172] (0xc001649d90) (0xc001a4f680) Create stream I0319 21:37:14.820182 6 log.go:172] (0xc001649d90) (0xc001a4f680) Stream added, broadcasting: 5 I0319 21:37:14.821099 6 log.go:172] (0xc001649d90) Reply frame received for 5 I0319 21:37:14.878998 6 log.go:172] (0xc001649d90) Data frame received for 5 I0319 21:37:14.879028 6 log.go:172] (0xc001a4f680) (5) Data frame handling I0319 21:37:14.879047 6 log.go:172] (0xc001649d90) Data frame received for 3 I0319 21:37:14.879060 6 log.go:172] (0xc001ae52c0) (3) Data frame handling I0319 21:37:14.879068 6 log.go:172] (0xc001ae52c0) (3) Data frame sent I0319 21:37:14.879077 6 log.go:172] (0xc001649d90) Data frame received for 3 I0319 21:37:14.879085 6 log.go:172] (0xc001ae52c0) (3) Data frame handling I0319 21:37:14.880084 6 log.go:172] (0xc001649d90) Data frame received for 1 I0319 21:37:14.880123 6 log.go:172] (0xc001ae5180) (1) Data frame handling I0319 21:37:14.880158 6 log.go:172] (0xc001ae5180) (1) Data frame sent I0319 21:37:14.880180 6 log.go:172] (0xc001649d90) (0xc001ae5180) Stream removed, broadcasting: 1 I0319 21:37:14.880209 6 log.go:172] (0xc001649d90) Go away received I0319 21:37:14.880311 6 log.go:172] (0xc001649d90) (0xc001ae5180) Stream removed, broadcasting: 1 I0319 21:37:14.880336 6 log.go:172] (0xc001649d90) (0xc001ae52c0) Stream removed, broadcasting: 3 I0319 21:37:14.880357 6 log.go:172] (0xc001649d90) (0xc001a4f680) Stream removed, broadcasting: 5 Mar 19 21:37:14.880: INFO: Exec stderr: "" Mar 19 21:37:14.880: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:14.880: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:14.928993 6 log.go:172] (0xc005230b00) (0xc002a10320) Create stream I0319 21:37:14.929031 6 log.go:172] (0xc005230b00) (0xc002a10320) Stream added, broadcasting: 1 I0319 21:37:14.931294 6 log.go:172] (0xc005230b00) Reply frame received for 1 I0319 21:37:14.931333 6 log.go:172] (0xc005230b00) (0xc001a4f720) Create stream I0319 21:37:14.931347 6 log.go:172] (0xc005230b00) (0xc001a4f720) Stream added, broadcasting: 3 I0319 21:37:14.932489 6 log.go:172] (0xc005230b00) Reply frame received for 3 I0319 21:37:14.932534 6 log.go:172] (0xc005230b00) (0xc0019dc6e0) Create stream I0319 21:37:14.932550 6 log.go:172] (0xc005230b00) (0xc0019dc6e0) Stream added, broadcasting: 5 I0319 21:37:14.933690 6 log.go:172] (0xc005230b00) Reply frame received for 5 I0319 21:37:14.999122 6 log.go:172] (0xc005230b00) Data frame received for 5 I0319 21:37:14.999192 6 log.go:172] (0xc0019dc6e0) (5) Data frame handling I0319 21:37:14.999243 6 log.go:172] (0xc005230b00) Data frame received for 3 I0319 21:37:14.999281 6 log.go:172] (0xc001a4f720) (3) Data frame handling I0319 21:37:14.999319 6 log.go:172] (0xc001a4f720) (3) Data frame sent I0319 21:37:14.999344 
6 log.go:172] (0xc005230b00) Data frame received for 3 I0319 21:37:14.999365 6 log.go:172] (0xc001a4f720) (3) Data frame handling I0319 21:37:15.001312 6 log.go:172] (0xc005230b00) Data frame received for 1 I0319 21:37:15.001351 6 log.go:172] (0xc002a10320) (1) Data frame handling I0319 21:37:15.001386 6 log.go:172] (0xc002a10320) (1) Data frame sent I0319 21:37:15.001598 6 log.go:172] (0xc005230b00) (0xc002a10320) Stream removed, broadcasting: 1 I0319 21:37:15.001658 6 log.go:172] (0xc005230b00) Go away received I0319 21:37:15.001746 6 log.go:172] (0xc005230b00) (0xc002a10320) Stream removed, broadcasting: 1 I0319 21:37:15.001774 6 log.go:172] (0xc005230b00) (0xc001a4f720) Stream removed, broadcasting: 3 I0319 21:37:15.001796 6 log.go:172] (0xc005230b00) (0xc0019dc6e0) Stream removed, broadcasting: 5 Mar 19 21:37:15.001: INFO: Exec stderr: "" Mar 19 21:37:15.001: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6096 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:37:15.001: INFO: >>> kubeConfig: /root/.kube/config I0319 21:37:15.038338 6 log.go:172] (0xc001e70bb0) (0xc0019dcaa0) Create stream I0319 21:37:15.038365 6 log.go:172] (0xc001e70bb0) (0xc0019dcaa0) Stream added, broadcasting: 1 I0319 21:37:15.040167 6 log.go:172] (0xc001e70bb0) Reply frame received for 1 I0319 21:37:15.040218 6 log.go:172] (0xc001e70bb0) (0xc002a10460) Create stream I0319 21:37:15.040232 6 log.go:172] (0xc001e70bb0) (0xc002a10460) Stream added, broadcasting: 3 I0319 21:37:15.041056 6 log.go:172] (0xc001e70bb0) Reply frame received for 3 I0319 21:37:15.041094 6 log.go:172] (0xc001e70bb0) (0xc002a105a0) Create stream I0319 21:37:15.041190 6 log.go:172] (0xc001e70bb0) (0xc002a105a0) Stream added, broadcasting: 5 I0319 21:37:15.042094 6 log.go:172] (0xc001e70bb0) Reply frame received for 5 I0319 21:37:15.110679 6 log.go:172] (0xc001e70bb0) Data frame received for 5 I0319 21:37:15.110721 6 log.go:172] (0xc002a105a0) (5) Data frame handling I0319 21:37:15.110758 6 log.go:172] (0xc001e70bb0) Data frame received for 3 I0319 21:37:15.110774 6 log.go:172] (0xc002a10460) (3) Data frame handling I0319 21:37:15.110791 6 log.go:172] (0xc002a10460) (3) Data frame sent I0319 21:37:15.110806 6 log.go:172] (0xc001e70bb0) Data frame received for 3 I0319 21:37:15.110820 6 log.go:172] (0xc002a10460) (3) Data frame handling I0319 21:37:15.112251 6 log.go:172] (0xc001e70bb0) Data frame received for 1 I0319 21:37:15.112294 6 log.go:172] (0xc0019dcaa0) (1) Data frame handling I0319 21:37:15.112318 6 log.go:172] (0xc0019dcaa0) (1) Data frame sent I0319 21:37:15.112337 6 log.go:172] (0xc001e70bb0) (0xc0019dcaa0) Stream removed, broadcasting: 1 I0319 21:37:15.112365 6 log.go:172] (0xc001e70bb0) Go away received I0319 21:37:15.112468 6 log.go:172] (0xc001e70bb0) (0xc0019dcaa0) Stream removed, broadcasting: 1 I0319 21:37:15.112494 6 log.go:172] (0xc001e70bb0) (0xc002a10460) Stream removed, broadcasting: 3 I0319 21:37:15.112509 6 log.go:172] (0xc001e70bb0) (0xc002a105a0) Stream removed, broadcasting: 5 Mar 19 21:37:15.112: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:37:15.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6096" for this suite. 
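
Decoding the exec transcript above: the test cats /etc/hosts and a bind-mounted copy of the original file in every container, expecting the kubelet-managed hosts file in the hostNetwork=false pod (except in busybox-3, which mounts its own file at /etc/hosts) and the node's untouched file in the hostNetwork=true pod. A rough manual equivalent, with an illustrative pod name and image:

kubectl run etc-hosts-demo --image=busybox --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/etc-hosts-demo
kubectl exec etc-hosts-demo -- head -n 1 /etc/hosts
kubectl delete pod etc-hosts-demo

On a kubelet-managed file the first line is a marker comment along the lines of "# Kubernetes-managed hosts file.", which is what separates the kubelet-written copy from a file baked into the image or mounted by the pod itself.
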
• [SLOW TEST:11.251 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1590,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:37:15.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:37:15.201: INFO: Creating ReplicaSet my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c Mar 19 21:37:15.220: INFO: Pod name my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c: Found 0 pods out of 1 Mar 19 21:37:20.240: INFO: Pod name my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c: Found 1 pods out of 1 Mar 19 21:37:20.240: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c" is running Mar 19 21:37:20.256: INFO: Pod "my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c-f5qnd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:37:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:37:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:37:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-19 21:37:15 +0000 UTC Reason: Message:}]) Mar 19 21:37:20.256: INFO: Trying to dial the pod Mar 19 21:37:25.268: INFO: Controller my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c: Got expected result from replica 1 [my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c-f5qnd]: "my-hostname-basic-a830dd57-4cc7-4bd8-bbde-c1feee1ee75c-f5qnd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:37:25.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-424" for this suite. 
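The manifest the test pipes to the API server is not echoed in the log. A minimal sketch of an equivalent ReplicaSet; the image and args are assumptions (agnhost's serve-hostname server answers HTTP with its own pod name, which matches the replica's reply logged above), and the namespace is ephemeral to this run:

cat <<'EOF' | kubectl create -f - --namespace=replicaset-424
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # assumed stand-in: replies to HTTP GET with the pod's hostname
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
EOF

The test then dials each replica and passes once every pod answers with its own name, which is why the "expected result" above is simply the pod name.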
• [SLOW TEST:10.135 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":102,"skipped":1630,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:37:25.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:37:25.332: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:37:26.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7980" for this suite. 
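Nothing beyond the kubeconfig handshake gets logged for this test; the object being round-tripped is a bare CustomResourceDefinition. A minimal sketch of the same create/delete cycle, with an illustrative group and kind standing in for the randomized names the suite generates:

cat <<'EOF' | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # CRD object names must be <plural>.<group>
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl delete crd foos.example.com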
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":103,"skipped":1635,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:37:26.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4133 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4133 STEP: Creating statefulset with conflicting port in namespace statefulset-4133 STEP: Waiting until pod test-pod starts running in namespace statefulset-4133 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4133 Mar 19 21:37:30.563: INFO: Observed stateful pod in namespace: statefulset-4133, name: ss-0, uid: 3af2cf22-8027-4ad1-85bb-02c64af73009, status phase: Failed. Waiting for statefulset controller to delete. Mar 19 21:37:30.580: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4133 STEP: Removing pod with conflicting port in namespace statefulset-4133 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4133 and enters the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 19 21:37:34.671: INFO: Deleting all statefulset in ns statefulset-4133 Mar 19 21:37:34.674: INFO: Scaling statefulset ss to 0 Mar 19 21:37:54.692: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:37:54.695: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:37:54.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4133" for this suite.
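The mechanic under test: the stateful pod ss-0 lands on a node that already holds a pod using the same hostPort, fails, and the StatefulSet controller keeps deleting and recreating it until the conflicting pod is removed. A sketch of watching that loop by hand, plus the teardown the AfterEach performs (names are specific to this run):

# watch ss-0 cycle through Failed -> deleted -> recreated
kubectl -n statefulset-4133 get pods -w
# the AfterEach equivalent: drain the set, then delete it
kubectl -n statefulset-4133 scale statefulset ss --replicas=0
kubectl -n statefulset-4133 wait --for=delete pod/ss-0 --timeout=60s
kubectl -n statefulset-4133 delete statefulset ss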
• [SLOW TEST:28.331 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":104,"skipped":1651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:37:54.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 19 21:37:54.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3315' Mar 19 21:37:55.114: INFO: stderr: "" Mar 19 21:37:55.114: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 19 21:37:55.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3315' Mar 19 21:37:55.289: INFO: stderr: "" Mar 19 21:37:55.289: INFO: stdout: "update-demo-nautilus-l4jj9 update-demo-nautilus-vhgld " Mar 19 21:37:55.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4jj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3315' Mar 19 21:37:55.381: INFO: stderr: "" Mar 19 21:37:55.381: INFO: stdout: "" Mar 19 21:37:55.381: INFO: update-demo-nautilus-l4jj9 is created but not running Mar 19 21:38:00.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3315' Mar 19 21:38:00.479: INFO: stderr: "" Mar 19 21:38:00.479: INFO: stdout: "update-demo-nautilus-l4jj9 update-demo-nautilus-vhgld " Mar 19 21:38:00.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4jj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3315' Mar 19 21:38:00.566: INFO: stderr: "" Mar 19 21:38:00.566: INFO: stdout: "true" Mar 19 21:38:00.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4jj9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3315' Mar 19 21:38:00.656: INFO: stderr: "" Mar 19 21:38:00.656: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:38:00.656: INFO: validating pod update-demo-nautilus-l4jj9 Mar 19 21:38:00.660: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:38:00.660: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:38:00.660: INFO: update-demo-nautilus-l4jj9 is verified up and running Mar 19 21:38:00.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vhgld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3315' Mar 19 21:38:00.741: INFO: stderr: "" Mar 19 21:38:00.741: INFO: stdout: "true" Mar 19 21:38:00.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vhgld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3315' Mar 19 21:38:00.839: INFO: stderr: "" Mar 19 21:38:00.839: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 21:38:00.839: INFO: validating pod update-demo-nautilus-vhgld Mar 19 21:38:00.843: INFO: got data: { "image": "nautilus.jpg" } Mar 19 21:38:00.843: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 21:38:00.843: INFO: update-demo-nautilus-vhgld is verified up and running STEP: using delete to clean up resources Mar 19 21:38:00.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3315' Mar 19 21:38:00.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 19 21:38:00.943: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 19 21:38:00.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3315' Mar 19 21:38:01.076: INFO: stderr: "No resources found in kubectl-3315 namespace.\n" Mar 19 21:38:01.076: INFO: stdout: "" Mar 19 21:38:01.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3315 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 19 21:38:01.171: INFO: stderr: "" Mar 19 21:38:01.171: INFO: stdout: "update-demo-nautilus-l4jj9\nupdate-demo-nautilus-vhgld\n" Mar 19 21:38:01.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3315' Mar 19 21:38:01.778: INFO: stderr: "No resources found in kubectl-3315 namespace.\n" Mar 19 21:38:01.778: INFO: stdout: "" Mar 19 21:38:01.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3315 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 19 21:38:01.948: INFO: stderr: "" Mar 19 21:38:01.948: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:38:01.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3315" for this suite. • [SLOW TEST:7.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":105,"skipped":1676,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:38:01.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-03eb43f4-cab8-4179-b618-1eab886918f7 in namespace container-probe-1537 Mar 19 21:38:06.075: INFO: Started pod busybox-03eb43f4-cab8-4179-b618-1eab886918f7 in namespace 
container-probe-1537 STEP: checking the pod's current state and verifying that restartCount is present Mar 19 21:38:06.088: INFO: Initial restart count of pod busybox-03eb43f4-cab8-4179-b618-1eab886918f7 is 0 Mar 19 21:38:54.208: INFO: Restart count of pod container-probe-1537/busybox-03eb43f4-cab8-4179-b618-1eab886918f7 is now 1 (48.120058005s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:38:54.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1537" for this suite. • [SLOW TEST:52.295 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1685,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:38:54.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9286/configmap-test-d7c85391-5def-4b77-901c-7aa810ad5c2a STEP: Creating a pod to test consume configMaps Mar 19 21:38:54.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17" in namespace "configmap-9286" to be "success or failure" Mar 19 21:38:54.330: INFO: Pod "pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414381ms Mar 19 21:38:56.334: INFO: Pod "pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007236101s Mar 19 21:38:58.339: INFO: Pod "pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01277339s STEP: Saw pod success Mar 19 21:38:58.339: INFO: Pod "pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17" satisfied condition "success or failure" Mar 19 21:38:58.342: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17 container env-test: STEP: delete the pod Mar 19 21:38:58.401: INFO: Waiting for pod pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17 to disappear Mar 19 21:38:58.419: INFO: Pod pod-configmaps-b14e270d-0e62-43bc-b458-ea1da39eaa17 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:38:58.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9286" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1694,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:38:58.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 19 21:38:58.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 19 21:38:58.578: INFO: stderr: "" Mar 19 21:38:58.578: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:38:58.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5555" for this suite. 
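The \x1b[0;32m sequences in the stdout above are only kubectl's terminal colors; the assertion is simply that a "Kubernetes master" line is present. The same check by hand, plus the fuller dump the closing hint refers to:

kubectl cluster-info
# gathers much more state (pods, events, container logs) for offline debugging
kubectl cluster-info dump --namespaces kube-system --output-directory=/tmp/cluster-dump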
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":108,"skipped":1695,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:38:58.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 21:38:58.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3018' Mar 19 21:38:58.732: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 19 21:38:58.732: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 19 21:38:58.819: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-s67nf] Mar 19 21:38:58.820: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-s67nf" in namespace "kubectl-3018" to be "running and ready" Mar 19 21:38:58.831: INFO: Pod "e2e-test-httpd-rc-s67nf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.610593ms Mar 19 21:39:00.834: INFO: Pod "e2e-test-httpd-rc-s67nf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01467456s Mar 19 21:39:02.839: INFO: Pod "e2e-test-httpd-rc-s67nf": Phase="Running", Reason="", readiness=true. Elapsed: 4.018985449s Mar 19 21:39:02.839: INFO: Pod "e2e-test-httpd-rc-s67nf" satisfied condition "running and ready" Mar 19 21:39:02.839: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-s67nf] Mar 19 21:39:02.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3018' Mar 19 21:39:02.969: INFO: stderr: "" Mar 19 21:39:02.969: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.246. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.246. 
Set the 'ServerName' directive globally to suppress this message\n[Thu Mar 19 21:39:01.065658 2020] [mpm_event:notice] [pid 1:tid 140432442973032] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Mar 19 21:39:01.065710 2020] [core:notice] [pid 1:tid 140432442973032] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 19 21:39:02.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3018' Mar 19 21:39:03.062: INFO: stderr: "" Mar 19 21:39:03.062: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:39:03.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3018" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":109,"skipped":1701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:39:03.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD Mar 19 21:39:03.131: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:39:18.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4980" for this suite.
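The "mark a version not served" step boils down to flipping served: false on one version of a multi-version CRD and confirming that its definitions drop out of the aggregated OpenAPI document while the other version's remain. A sketch against a hypothetical two-version CRD foos.example.com (the suite's CRDs use randomized names, and the reversed-group definition key below is an assumption about the published naming scheme):

# stop serving v1 (versions[0]); v2 stays served
kubectl patch crd foos.example.com --type=json \
  -p='[{"op":"replace","path":"/spec/versions/0/served","value":false}]'
# the v1 definition should disappear from the published spec;
# expect this count to drop to 0 once the change propagates
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Foo'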
• [SLOW TEST:15.109 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":110,"skipped":1728,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:39:18.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:39:18.261: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 19 21:39:20.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4349 create -f -' Mar 19 21:39:23.399: INFO: stderr: "" Mar 19 21:39:23.399: INFO: stdout: "e2e-test-crd-publish-openapi-140-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 19 21:39:23.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4349 delete e2e-test-crd-publish-openapi-140-crds test-cr' Mar 19 21:39:23.520: INFO: stderr: "" Mar 19 21:39:23.520: INFO: stdout: "e2e-test-crd-publish-openapi-140-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 19 21:39:23.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4349 apply -f -' Mar 19 21:39:23.768: INFO: stderr: "" Mar 19 21:39:23.768: INFO: stdout: "e2e-test-crd-publish-openapi-140-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 19 21:39:23.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4349 delete e2e-test-crd-publish-openapi-140-crds test-cr' Mar 19 21:39:23.876: INFO: stderr: "" Mar 19 21:39:23.876: INFO: stdout: "e2e-test-crd-publish-openapi-140-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 19 21:39:23.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-140-crds' Mar 19 21:39:24.103: INFO: stderr: "" Mar 19 21:39:24.103: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-140-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:39:26.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4349" for this suite. • [SLOW TEST:8.814 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":111,"skipped":1729,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:39:26.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-5539 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5539 to expose endpoints map[] Mar 19 21:39:27.107: INFO: Get endpoints failed (17.16881ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 19 21:39:28.110: INFO: successfully validated that service multi-endpoint-test in namespace services-5539 exposes endpoints map[] (1.020966759s elapsed) STEP: Creating pod pod1 in namespace services-5539 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5539 to expose endpoints map[pod1:[100]] Mar 19 21:39:32.154: INFO: successfully validated that service multi-endpoint-test in namespace services-5539 exposes endpoints map[pod1:[100]] (4.036788452s elapsed) STEP: Creating pod pod2 in namespace services-5539 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5539 to expose endpoints map[pod1:[100] pod2:[101]] Mar 19 21:39:35.228: INFO: successfully validated that service multi-endpoint-test in namespace services-5539 exposes endpoints map[pod1:[100] pod2:[101]] (3.071160919s elapsed) STEP: Deleting pod pod1 in namespace services-5539 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5539 to expose endpoints map[pod2:[101]] Mar 19 21:39:36.255: INFO: successfully validated that service multi-endpoint-test in namespace services-5539 exposes endpoints map[pod2:[101]] (1.023086472s elapsed) STEP: Deleting pod pod2 in namespace services-5539 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5539 to expose endpoints map[] Mar 19 21:39:37.269: INFO: successfully validated that service multi-endpoint-test in namespace services-5539 exposes endpoints map[] (1.008358573s elapsed) [AfterEach] [sig-network] 
Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:39:37.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5539" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.383 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":112,"skipped":1739,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:39:37.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5903 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 19 21:39:37.488: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 19 21:40:03.651: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.248:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5903 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:40:03.651: INFO: >>> kubeConfig: /root/.kube/config I0319 21:40:03.687039 6 log.go:172] (0xc001e70210) (0xc0012f0b40) Create stream I0319 21:40:03.687070 6 log.go:172] (0xc001e70210) (0xc0012f0b40) Stream added, broadcasting: 1 I0319 21:40:03.688980 6 log.go:172] (0xc001e70210) Reply frame received for 1 I0319 21:40:03.689015 6 log.go:172] (0xc001e70210) (0xc002a100a0) Create stream I0319 21:40:03.689028 6 log.go:172] (0xc001e70210) (0xc002a100a0) Stream added, broadcasting: 3 I0319 21:40:03.690090 6 log.go:172] (0xc001e70210) Reply frame received for 3 I0319 21:40:03.690139 6 log.go:172] (0xc001e70210) (0xc0012f0e60) Create stream I0319 21:40:03.690161 6 log.go:172] (0xc001e70210) (0xc0012f0e60) Stream added, broadcasting: 5 I0319 21:40:03.690998 6 log.go:172] (0xc001e70210) Reply frame received for 5 I0319 21:40:03.778158 6 log.go:172] (0xc001e70210) Data frame received for 3 I0319 21:40:03.778196 6 log.go:172] (0xc002a100a0) (3) Data frame handling I0319 21:40:03.778220 6 log.go:172] (0xc002a100a0) (3) Data frame sent I0319 21:40:03.778492 6 log.go:172] (0xc001e70210) Data frame received for 5 I0319 21:40:03.778510 6 log.go:172] (0xc0012f0e60) (5) Data frame handling I0319 21:40:03.778539 6 log.go:172] (0xc001e70210) Data 
frame received for 3 I0319 21:40:03.778567 6 log.go:172] (0xc002a100a0) (3) Data frame handling I0319 21:40:03.781259 6 log.go:172] (0xc001e70210) Data frame received for 1 I0319 21:40:03.781284 6 log.go:172] (0xc0012f0b40) (1) Data frame handling I0319 21:40:03.781295 6 log.go:172] (0xc0012f0b40) (1) Data frame sent I0319 21:40:03.781307 6 log.go:172] (0xc001e70210) (0xc0012f0b40) Stream removed, broadcasting: 1 I0319 21:40:03.781400 6 log.go:172] (0xc001e70210) (0xc0012f0b40) Stream removed, broadcasting: 1 I0319 21:40:03.781414 6 log.go:172] (0xc001e70210) (0xc002a100a0) Stream removed, broadcasting: 3 I0319 21:40:03.781575 6 log.go:172] (0xc001e70210) (0xc0012f0e60) Stream removed, broadcasting: 5 Mar 19 21:40:03.781: INFO: Found all expected endpoints: [netserver-0] I0319 21:40:03.781673 6 log.go:172] (0xc001e70210) Go away received Mar 19 21:40:03.784: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.27:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5903 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:40:03.784: INFO: >>> kubeConfig: /root/.kube/config I0319 21:40:03.807261 6 log.go:172] (0xc005230580) (0xc0023e7b80) Create stream I0319 21:40:03.807298 6 log.go:172] (0xc005230580) (0xc0023e7b80) Stream added, broadcasting: 1 I0319 21:40:03.808956 6 log.go:172] (0xc005230580) Reply frame received for 1 I0319 21:40:03.808985 6 log.go:172] (0xc005230580) (0xc002a10320) Create stream I0319 21:40:03.808994 6 log.go:172] (0xc005230580) (0xc002a10320) Stream added, broadcasting: 3 I0319 21:40:03.809919 6 log.go:172] (0xc005230580) Reply frame received for 3 I0319 21:40:03.809948 6 log.go:172] (0xc005230580) (0xc0011ab9a0) Create stream I0319 21:40:03.809957 6 log.go:172] (0xc005230580) (0xc0011ab9a0) Stream added, broadcasting: 5 I0319 21:40:03.810808 6 log.go:172] (0xc005230580) Reply frame received for 5 I0319 21:40:03.882687 6 log.go:172] (0xc005230580) Data frame received for 3 I0319 21:40:03.882723 6 log.go:172] (0xc002a10320) (3) Data frame handling I0319 21:40:03.882735 6 log.go:172] (0xc002a10320) (3) Data frame sent I0319 21:40:03.882745 6 log.go:172] (0xc005230580) Data frame received for 3 I0319 21:40:03.882753 6 log.go:172] (0xc002a10320) (3) Data frame handling I0319 21:40:03.882771 6 log.go:172] (0xc005230580) Data frame received for 5 I0319 21:40:03.882780 6 log.go:172] (0xc0011ab9a0) (5) Data frame handling I0319 21:40:03.884543 6 log.go:172] (0xc005230580) Data frame received for 1 I0319 21:40:03.884576 6 log.go:172] (0xc0023e7b80) (1) Data frame handling I0319 21:40:03.884603 6 log.go:172] (0xc0023e7b80) (1) Data frame sent I0319 21:40:03.884621 6 log.go:172] (0xc005230580) (0xc0023e7b80) Stream removed, broadcasting: 1 I0319 21:40:03.884640 6 log.go:172] (0xc005230580) Go away received I0319 21:40:03.884849 6 log.go:172] (0xc005230580) (0xc0023e7b80) Stream removed, broadcasting: 1 I0319 21:40:03.884876 6 log.go:172] (0xc005230580) (0xc002a10320) Stream removed, broadcasting: 3 I0319 21:40:03.884888 6 log.go:172] (0xc005230580) (0xc0011ab9a0) Stream removed, broadcasting: 5 Mar 19 21:40:03.884: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:40:03.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5903" for this 
suite. • [SLOW TEST:26.696 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1753,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:40:04.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 19 21:40:04.212: INFO: Waiting up to 5m0s for pod "pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed" in namespace "emptydir-6254" to be "success or failure" Mar 19 21:40:04.224: INFO: Pod "pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed": Phase="Pending", Reason="", readiness=false. Elapsed: 12.07918ms Mar 19 21:40:06.241: INFO: Pod "pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029008588s Mar 19 21:40:08.245: INFO: Pod "pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032753779s STEP: Saw pod success Mar 19 21:40:08.245: INFO: Pod "pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed" satisfied condition "success or failure" Mar 19 21:40:08.248: INFO: Trying to get logs from node jerma-worker2 pod pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed container test-container: STEP: delete the pod Mar 19 21:40:08.319: INFO: Waiting for pod pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed to disappear Mar 19 21:40:08.332: INFO: Pod pod-24e2eeb4-81d4-4a6e-a7a6-4e1de9f553ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:40:08.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6254" for this suite. 
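The pod spec behind this check is not printed; the assertion is that an emptyDir on the default (node disk) medium is mounted with mode 0777. A minimal sketch of the same probe, with illustrative names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # prints the mount's permissions; expect drwxrwxrwx for the default medium
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-check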
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1758,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:40:08.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 19 21:40:08.439: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:08.443: INFO: Number of nodes with available pods: 0 Mar 19 21:40:08.443: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:09.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:09.624: INFO: Number of nodes with available pods: 0 Mar 19 21:40:09.624: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:10.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:10.515: INFO: Number of nodes with available pods: 0 Mar 19 21:40:10.515: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:11.448: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:11.450: INFO: Number of nodes with available pods: 0 Mar 19 21:40:11.450: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:12.454: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:12.457: INFO: Number of nodes with available pods: 0 Mar 19 21:40:12.457: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:13.447: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:13.450: INFO: Number of nodes with available pods: 2 Mar 19 21:40:13.450: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 19 21:40:13.504: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:13.507: INFO: Number of nodes with available pods: 1 Mar 19 21:40:13.507: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:14.523: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:14.527: INFO: Number of nodes with available pods: 1 Mar 19 21:40:14.527: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:15.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:15.515: INFO: Number of nodes with available pods: 1 Mar 19 21:40:15.515: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:16.515: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:16.518: INFO: Number of nodes with available pods: 1 Mar 19 21:40:16.518: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:17.512: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:17.515: INFO: Number of nodes with available pods: 1 Mar 19 21:40:17.515: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:18.519: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:18.522: INFO: Number of nodes with available pods: 1 Mar 19 21:40:18.522: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:19.512: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:19.516: INFO: Number of nodes with available pods: 1 Mar 19 21:40:19.516: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:20.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:20.515: INFO: Number of nodes with available pods: 1 Mar 19 21:40:20.515: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:21.512: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:21.515: INFO: Number of nodes with available pods: 1 Mar 19 21:40:21.515: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:40:22.512: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:40:22.515: INFO: Number of nodes with available pods: 2 Mar 19 21:40:22.515: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: 
Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2369, will wait for the garbage collector to delete the pods Mar 19 21:40:22.596: INFO: Deleting DaemonSet.extensions daemon-set took: 24.125573ms Mar 19 21:40:22.896: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.329223ms Mar 19 21:40:29.499: INFO: Number of nodes with available pods: 0 Mar 19 21:40:29.499: INFO: Number of running nodes: 0, number of available pods: 0 Mar 19 21:40:29.502: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2369/daemonsets","resourceVersion":"1120651"},"items":null} Mar 19 21:40:29.505: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2369/pods","resourceVersion":"1120651"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:40:29.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2369" for this suite. • [SLOW TEST:21.182 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":115,"skipped":1760,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:40:29.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0319 21:40:39.654074 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 19 21:40:39.654: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:40:39.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6450" for this suite. • [SLOW TEST:10.138 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":116,"skipped":1769,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:40:39.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ad54df38-8447-4945-9a8f-92c6a022e7de STEP: Creating a pod to test consume secrets Mar 19 21:40:39.782: INFO: Waiting up to 5m0s for pod "pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd" in namespace "secrets-1575" to be "success or failure" Mar 19 21:40:39.786: INFO: Pod "pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.947578ms Mar 19 21:40:41.790: INFO: Pod "pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008216996s Mar 19 21:40:43.795: INFO: Pod "pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012621692s STEP: Saw pod success Mar 19 21:40:43.795: INFO: Pod "pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd" satisfied condition "success or failure" Mar 19 21:40:43.797: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd container secret-volume-test: STEP: delete the pod Mar 19 21:40:43.817: INFO: Waiting for pod pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd to disappear Mar 19 21:40:43.870: INFO: Pod pod-secrets-efa359fa-9eac-4abe-9d5e-ce644c307bfd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:40:43.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1575" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1774,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:40:43.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 19 21:40:43.949: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 19 21:40:44.012: INFO: Waiting for terminating namespaces to be deleted... Mar 19 21:40:44.016: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 19 21:40:44.033: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 21:40:44.033: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 21:40:44.033: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 21:40:44.033: INFO: Container kube-proxy ready: true, restart count 0 Mar 19 21:40:44.033: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 19 21:40:44.039: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 21:40:44.039: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 21:40:44.039: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 21:40:44.039: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a04272ec-527e-4a27-baa3-c8dc04ba2c39 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-a04272ec-527e-4a27-baa3-c8dc04ba2c39 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a04272ec-527e-4a27-baa3-c8dc04ba2c39 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:40:52.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5092" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:9.074 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":118,"skipped":1793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:40:52.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 21:40:54.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-94' Mar 19 21:40:54.362: INFO: stderr: "" Mar 19 21:40:54.362: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 19 21:40:54.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-94' Mar 19 21:41:09.263: INFO: stderr: "" Mar 19 21:41:09.263: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:41:09.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-94" for this suite. 
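For reference, the pod-creating form of kubectl run exercised in this test can be reproduced by hand. The flags below come straight from the logged command; only the namespace is changed to a generic one:

  # --restart=Never selects the run-pod/v1 generator, so a bare Pod is created
  kubectl run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 \
      --image=docker.io/library/httpd:2.4.38-alpine --namespace=default
  # Confirm the created object is a Pod with the expected restart policy:
  kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never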
• [SLOW TEST:16.337 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":119,"skipped":1817,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:41:09.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 19 21:41:17.417: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:17.424: INFO: Pod pod-with-poststart-http-hook still exists Mar 19 21:41:19.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:19.428: INFO: Pod pod-with-poststart-http-hook still exists Mar 19 21:41:21.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:21.427: INFO: Pod pod-with-poststart-http-hook still exists Mar 19 21:41:23.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:23.430: INFO: Pod pod-with-poststart-http-hook still exists Mar 19 21:41:25.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:25.440: INFO: Pod pod-with-poststart-http-hook still exists Mar 19 21:41:27.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:27.446: INFO: Pod pod-with-poststart-http-hook still exists Mar 19 21:41:29.424: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 19 21:41:29.428: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:41:29.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1393" for this suite. 
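A minimal sketch of the postStart HTTP hook this test exercises, assuming a single pod whose hook targets its own web server (the conformance test instead points the hook at a dedicated handler pod; a failed hook causes the kubelet to kill the container):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook   # name mirrors the test; the rest is illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: docker.io/library/httpd:2.4.38-alpine
      lifecycle:
        postStart:
          httpGet: {path: /, port: 80}   # GET fired right after the container starts
  EOF

The container is not reported Running until the hook has completed, which is why the test can assert on hook delivery before checking pod state.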
• [SLOW TEST:20.146 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1823,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:41:29.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:41:40.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3132" for this suite. • [SLOW TEST:11.152 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":121,"skipped":1825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:41:40.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:41:40.702: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 19 21:41:42.731: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:41:43.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3930" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":122,"skipped":1866,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:41:43.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 19 21:41:44.137: INFO: >>> kubeConfig: /root/.kube/config Mar 19 21:41:46.055: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:41:57.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7725" for this suite. 
• [SLOW TEST:13.730 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":123,"skipped":1879,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:41:57.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:41:57.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-956" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1882,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:41:57.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-xjdv STEP: Creating a pod to test atomic-volume-subpath Mar 19 21:41:57.747: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xjdv" in namespace "subpath-7152" to be "success or failure" Mar 19 21:41:57.751: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.646677ms Mar 19 21:41:59.755: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008709055s Mar 19 21:42:01.760: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 4.012803208s Mar 19 21:42:03.764: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 6.016872096s Mar 19 21:42:05.767: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 8.020696749s Mar 19 21:42:07.772: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 10.024831786s Mar 19 21:42:09.775: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 12.02817084s Mar 19 21:42:11.780: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 14.032803632s Mar 19 21:42:13.784: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 16.037701094s Mar 19 21:42:15.788: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 18.041664492s Mar 19 21:42:17.793: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 20.045782926s Mar 19 21:42:19.797: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Running", Reason="", readiness=true. Elapsed: 22.050191206s Mar 19 21:42:21.801: INFO: Pod "pod-subpath-test-secret-xjdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.0545109s STEP: Saw pod success Mar 19 21:42:21.801: INFO: Pod "pod-subpath-test-secret-xjdv" satisfied condition "success or failure" Mar 19 21:42:21.804: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-xjdv container test-container-subpath-secret-xjdv: STEP: delete the pod Mar 19 21:42:21.840: INFO: Waiting for pod pod-subpath-test-secret-xjdv to disappear Mar 19 21:42:21.863: INFO: Pod pod-subpath-test-secret-xjdv no longer exists STEP: Deleting pod pod-subpath-test-secret-xjdv Mar 19 21:42:21.863: INFO: Deleting pod "pod-subpath-test-secret-xjdv" in namespace "subpath-7152" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:42:21.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7152" for this suite. 
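The subPath mechanics behind this test can be sketched with an ordinary secret volume; all names below are illustrative. Note that a container mounting a Secret via subPath does not receive later updates to the Secret:

  kubectl create secret generic sub-secret --from-literal=sub.txt='mounted via subPath'
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: data
      secret: {secretName: sub-secret}
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["cat", "/mnt/sub.txt"]
      volumeMounts:
      - name: data
        mountPath: /mnt/sub.txt
        subPath: sub.txt     # mount a single file out of the volume
  EOF
  kubectl logs pod-subpath-demo   # prints the secret value once the pod has run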
• [SLOW TEST:24.242 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":125,"skipped":1888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:42:21.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0319 21:42:23.047154 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 19 21:42:23.047: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:42:23.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8693" for this suite. 
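What the garbage collector is checking here is ownership metadata: the Deployment owns the ReplicaSet, which owns the Pods. A quick way to see the same chain by hand (the deployment name is illustrative):

  kubectl create deployment web --image=docker.io/library/httpd:2.4.38-alpine
  # The ReplicaSet's ownerReference points back at the Deployment:
  kubectl get rs -l app=web -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'   # Deployment
  # Deleting the owner without orphaning lets the GC remove the dependent RS and Pods:
  kubectl delete deployment web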
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":126,"skipped":1923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:42:23.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 19 21:42:23.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9407' Mar 19 21:42:23.450: INFO: stderr: "" Mar 19 21:42:23.450: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 19 21:42:24.454: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:42:24.454: INFO: Found 0 / 1 Mar 19 21:42:25.454: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:42:25.454: INFO: Found 0 / 1 Mar 19 21:42:26.454: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:42:26.454: INFO: Found 1 / 1 Mar 19 21:42:26.454: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 19 21:42:26.457: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:42:26.457: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 19 21:42:26.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-nww8z --namespace=kubectl-9407 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 19 21:42:26.550: INFO: stderr: "" Mar 19 21:42:26.550: INFO: stdout: "pod/agnhost-master-nww8z patched\n" STEP: checking annotations Mar 19 21:42:26.560: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:42:26.560: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:42:26.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9407" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":127,"skipped":1950,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:42:26.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-1653a642-bc74-4168-a3a9-b29995d2e02b in namespace container-probe-5610 Mar 19 21:42:30.672: INFO: Started pod test-webserver-1653a642-bc74-4168-a3a9-b29995d2e02b in namespace container-probe-5610 STEP: checking the pod's current state and verifying that restartCount is present Mar 19 21:42:30.675: INFO: Initial restart count of pod test-webserver-1653a642-bc74-4168-a3a9-b29995d2e02b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5610" for this suite. 
• [SLOW TEST:244.829 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:31.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 19 21:46:31.452: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:31.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-480" for this suite. 
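--port 0 (passed here via the short flag -p 0) asks the proxy to bind an ephemeral port and print the address it chose; --disable-filter turns off the proxy's request filtering, which is why the tool warns that it is unsafe outside a test. By hand:

  kubectl proxy --port=0 &
  # The proxy prints e.g. "Starting to serve on 127.0.0.1:37905"; the port varies per run:
  curl http://127.0.0.1:37905/api/    # substitute the printed port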
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":129,"skipped":1987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:31.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:46:31.682: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 19 21:46:33.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 create -f -' Mar 19 21:46:37.176: INFO: stderr: "" Mar 19 21:46:37.176: INFO: stdout: "e2e-test-crd-publish-openapi-6654-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 19 21:46:37.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 delete e2e-test-crd-publish-openapi-6654-crds test-cr' Mar 19 21:46:37.280: INFO: stderr: "" Mar 19 21:46:37.280: INFO: stdout: "e2e-test-crd-publish-openapi-6654-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 19 21:46:37.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 apply -f -' Mar 19 21:46:37.521: INFO: stderr: "" Mar 19 21:46:37.521: INFO: stdout: "e2e-test-crd-publish-openapi-6654-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 19 21:46:37.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-939 delete e2e-test-crd-publish-openapi-6654-crds test-cr' Mar 19 21:46:37.616: INFO: stderr: "" Mar 19 21:46:37.616: INFO: stdout: "e2e-test-crd-publish-openapi-6654-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 19 21:46:37.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6654-crds' Mar 19 21:46:37.838: INFO: stderr: "" Mar 19 21:46:37.838: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6654-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:40.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-939" for this suite. • [SLOW TEST:9.159 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":130,"skipped":2016,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:40.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:46:40.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b" in namespace "downward-api-7506" to be "success or failure" Mar 19 21:46:40.767: INFO: Pod "downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.14809ms Mar 19 21:46:42.771: INFO: Pod "downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006767573s Mar 19 21:46:44.775: INFO: Pod "downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010641782s STEP: Saw pod success Mar 19 21:46:44.775: INFO: Pod "downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b" satisfied condition "success or failure" Mar 19 21:46:44.777: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b container client-container: STEP: delete the pod Mar 19 21:46:44.827: INFO: Waiting for pod downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b to disappear Mar 19 21:46:44.839: INFO: Pod downwardapi-volume-c3a14544-699e-4f72-bdc2-fedd9265504b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:44.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7506" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2029,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:44.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4626/configmap-test-87a03a51-5996-4b0b-9416-9419fcc92a88 STEP: Creating a pod to test consume configMaps Mar 19 21:46:44.915: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e" in namespace "configmap-4626" to be "success or failure" Mar 19 21:46:44.929: INFO: Pod "pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.956097ms Mar 19 21:46:46.934: INFO: Pod "pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0183004s Mar 19 21:46:48.938: INFO: Pod "pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022551614s STEP: Saw pod success Mar 19 21:46:48.938: INFO: Pod "pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e" satisfied condition "success or failure" Mar 19 21:46:48.941: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e container env-test: STEP: delete the pod Mar 19 21:46:48.972: INFO: Waiting for pod pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e to disappear Mar 19 21:46:48.977: INFO: Pod pod-configmaps-0fefb814-8549-4c11-b160-57369fd85b5e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:48.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4626" for this suite. 
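The env-var consumption pattern this test covers is configMapKeyRef. A self-contained sketch with illustrative names:

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo $CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef: {name: demo-config, key: data-1}
  EOF
  kubectl logs env-demo    # prints: value-1 (once the pod has completed)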
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2029,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:48.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:46:49.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:46:51.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251209, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251209, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251209, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251209, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:46:54.551: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:55.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9609" for this suite. STEP: Destroying namespace "webhook-9609-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.156 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":133,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:55.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6731" for this suite. 
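The 406 in this test comes from the Table content negotiation that kubectl itself uses for get. One way to observe the negotiation directly, assuming a local kubectl proxy (the Accept string below is the v1beta1 form kubectl used in this era; treat it as an assumption to verify):

  kubectl proxy --port=8001 &
  curl -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/namespaces/default/pods
  # Built-in resources answer with a meta.k8s.io Table; an aggregated backend that
  # does not implement Table answers 406 Not Acceptable, which is what this test asserts.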
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":134,"skipped":2070,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:55.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 21:46:55.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2773' Mar 19 21:46:55.391: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 19 21:46:55.391: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 19 21:46:55.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2773' Mar 19 21:46:55.536: INFO: stderr: "" Mar 19 21:46:55.536: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:46:55.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2773" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":135,"skipped":2091,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:46:55.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 19 21:46:55.875: INFO: >>> kubeConfig: /root/.kube/config Mar 19 21:46:58.897: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:47:09.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6378" for this suite. • [SLOW TEST:13.711 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":136,"skipped":2095,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:47:09.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-mqq9 STEP: Creating a pod to test atomic-volume-subpath Mar 19 21:47:09.357: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mqq9" in namespace "subpath-1802" to be "success or failure" Mar 19 21:47:09.367: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.193834ms Mar 19 21:47:11.373: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015409231s Mar 19 21:47:13.376: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 4.018918323s Mar 19 21:47:15.380: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 6.022841975s Mar 19 21:47:17.384: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 8.026763617s Mar 19 21:47:19.391: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 10.034181371s Mar 19 21:47:21.395: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 12.038187503s Mar 19 21:47:23.399: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 14.041999088s Mar 19 21:47:25.421: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 16.063509757s Mar 19 21:47:27.424: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 18.067347843s Mar 19 21:47:29.427: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 20.070165998s Mar 19 21:47:31.432: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Running", Reason="", readiness=true. Elapsed: 22.075041901s Mar 19 21:47:33.436: INFO: Pod "pod-subpath-test-configmap-mqq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078954277s STEP: Saw pod success Mar 19 21:47:33.436: INFO: Pod "pod-subpath-test-configmap-mqq9" satisfied condition "success or failure" Mar 19 21:47:33.439: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-mqq9 container test-container-subpath-configmap-mqq9: STEP: delete the pod Mar 19 21:47:33.499: INFO: Waiting for pod pod-subpath-test-configmap-mqq9 to disappear Mar 19 21:47:33.504: INFO: Pod pod-subpath-test-configmap-mqq9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-mqq9 Mar 19 21:47:33.504: INFO: Deleting pod "pod-subpath-test-configmap-mqq9" in namespace "subpath-1802" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:47:33.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1802" for this suite. • [SLOW TEST:24.230 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":137,"skipped":2097,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:47:33.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:47:49.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2480" for this suite. • [SLOW TEST:16.138 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":138,"skipped":2116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:47:49.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 19 21:47:49.702: INFO: namespace kubectl-5356 Mar 19 21:47:49.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5356' Mar 19 21:47:50.007: INFO: stderr: "" Mar 19 21:47:50.007: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 19 21:47:51.011: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:47:51.011: INFO: Found 0 / 1 Mar 19 21:47:52.011: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:47:52.011: INFO: Found 0 / 1 Mar 19 21:47:53.012: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:47:53.012: INFO: Found 1 / 1 Mar 19 21:47:53.012: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 19 21:47:53.015: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:47:53.015: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 19 21:47:53.015: INFO: wait on agnhost-master startup in kubectl-5356 Mar 19 21:47:53.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-gb5pc agnhost-master --namespace=kubectl-5356' Mar 19 21:47:53.133: INFO: stderr: "" Mar 19 21:47:53.133: INFO: stdout: "Paused\n" STEP: exposing RC Mar 19 21:47:53.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5356' Mar 19 21:47:53.655: INFO: stderr: "" Mar 19 21:47:53.655: INFO: stdout: "service/rm2 exposed\n" Mar 19 21:47:53.895: INFO: Service rm2 in namespace kubectl-5356 found. STEP: exposing service Mar 19 21:47:55.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5356' Mar 19 21:47:56.060: INFO: stderr: "" Mar 19 21:47:56.060: INFO: stdout: "service/rm3 exposed\n" Mar 19 21:47:56.069: INFO: Service rm3 in namespace kubectl-5356 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:47:58.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5356" for this suite. • [SLOW TEST:8.428 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":139,"skipped":2139,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:47:58.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:47:58.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7970' Mar 19 21:47:58.368: INFO: stderr: "" Mar 19 21:47:58.368: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 19 21:47:58.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7970' Mar 19 21:47:58.675: INFO: stderr: "" Mar 19 21:47:58.675: INFO: stdout: "service/agnhost-master created\n" 
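The two `kubectl create -f -` calls above pipe manifests over stdin, so the manifests themselves never appear in the log. A minimal sketch of the replication-controller half, with the labels, image, and container port reconstructed from the `kubectl describe` output that follows (the suite's real manifest may differ):

kubectl create -f - --namespace=kubectl-7970 <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    app: agnhost
    role: master
  template:
    metadata:
      labels:
        app: agnhost
        role: master
    spec:
      containers:
      - name: agnhost-master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        ports:
        - containerPort: 6379
EOF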
STEP: Waiting for Agnhost master to start. Mar 19 21:47:59.732: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:47:59.732: INFO: Found 0 / 1 Mar 19 21:48:00.680: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:48:00.680: INFO: Found 1 / 1 Mar 19 21:48:00.680: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 19 21:48:00.683: INFO: Selector matched 1 pods for map[app:agnhost] Mar 19 21:48:00.683: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 19 21:48:00.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-z5dx7 --namespace=kubectl-7970' Mar 19 21:48:00.812: INFO: stderr: "" Mar 19 21:48:00.812: INFO: stdout: "Name: agnhost-master-z5dx7\nNamespace: kubectl-7970\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Thu, 19 Mar 2020 21:47:58 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.43\nIPs:\n IP: 10.244.2.43\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://eb7cc4f7a96c9d8f82a9e36961cea807ed7626e0ce2004b70bb8ffb2a0edc340\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 19 Mar 2020 21:48:00 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-mdd5q (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-mdd5q:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-mdd5q\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-7970/agnhost-master-z5dx7 to jerma-worker2\n Normal Pulled 1s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 0s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 0s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 19 21:48:00.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7970' Mar 19 21:48:00.978: INFO: stderr: "" Mar 19 21:48:00.978: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7970\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-z5dx7\n" Mar 19 21:48:00.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7970' Mar 19 21:48:01.076: INFO: stderr: "" Mar 19 21:48:01.076: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7970\nLabels: 
app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.56.41\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.43:6379\nSession Affinity: None\nEvents: <none>\n" Mar 19 21:48:01.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 19 21:48:01.202: INFO: stderr: "" Mar 19 21:48:01.202: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Thu, 19 Mar 2020 21:47:57 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 19 Mar 2020 21:44:07 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 19 Mar 2020 21:44:07 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 19 Mar 2020 21:44:07 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 19 Mar 2020 21:44:07 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d3h\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d3h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d3h\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d3h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d3h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d3h\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d3h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d3h\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d3h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n 
Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Mar 19 21:48:01.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7970' Mar 19 21:48:01.305: INFO: stderr: "" Mar 19 21:48:01.305: INFO: stdout: "Name: kubectl-7970\nLabels: e2e-framework=kubectl\n e2e-run=4bf8de45-bc7e-47e5-b1ad-d03fb933c17a\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:01.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7970" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":140,"skipped":2144,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:01.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1674.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1674.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1674.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1674.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1674.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1674.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:48:07.466: INFO: DNS probes using dns-1674/dns-test-da2d790f-0bb1-4097-8e7b-b8d24d9e177d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:07.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1674" for this suite. • [SLOW TEST:6.270 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":141,"skipped":2163,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:07.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
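Stripped of the $$-escaping the suite needs when embedding them as pod args, the wheezy/jessie probe loops from the DNS test above are ordinary shell. A trimmed equivalent, assuming it runs inside a pod in namespace dns-1674 with getent and dig available:

for i in $(seq 1 600); do
  # resolver-library lookup of the headless service record
  test -n "$(getent hosts dns-querier-2.dns-test-service-2.dns-1674.svc.cluster.local)" && echo OK
  # derive the pod A record from the pod IP (e.g. 10.244.2.43 -> 10-244-2-43.dns-1674.pod.cluster.local)
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1674.pod.cluster.local"}')
  # query that record over UDP, then over TCP
  test -n "$(dig +notcp +noall +answer +search ${podARec} A)" && echo OK
  test -n "$(dig +tcp +noall +answer +search ${podARec} A)" && echo OK
  sleep 1
done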
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 19 21:48:15.879: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:15.884: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:17.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:17.890: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:19.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:19.889: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:21.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:21.889: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:23.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:23.889: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:25.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:25.889: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:27.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:27.889: INFO: Pod pod-with-prestop-exec-hook still exists Mar 19 21:48:29.885: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 19 21:48:29.889: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:29.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6972" for this suite. 
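The pod deleted above carries a preStop exec hook, which the kubelet runs before signalling the container; that is why the pod lingers through several poll intervals after the delete. A minimal sketch of such a pod; the image and hook command are placeholders, since the log never prints the manifest (the suite's real hook reports back to the HTTP handler container created in BeforeEach):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: docker.io/library/httpd:2.4.38-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop"]
EOF
# deleting the pod triggers the hook before the container receives SIGTERM
kubectl delete pod pod-with-prestop-exec-hook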
• [SLOW TEST:22.320 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:29.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 21:48:29.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-486' Mar 19 21:48:30.074: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 19 21:48:30.074: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 19 21:48:30.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-486' Mar 19 21:48:30.223: INFO: stderr: "" Mar 19 21:48:30.223: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:30.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-486" for this suite. 
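The deprecation warning captured above is kubectl 1.17 steering users away from generators. Both forms below produce a Job from the same image (names and namespace taken from the log; the two can differ in defaults such as the pod restart policy):

# deprecated generator form, as run by the suite
kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-486
# replacement form suggested by the warning
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-486
# cleanup, as in the AfterEach above
kubectl delete jobs e2e-test-httpd-job --namespace=kubectl-486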
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":143,"skipped":2230,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:30.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 19 21:48:30.316: INFO: Waiting up to 5m0s for pod "var-expansion-db666624-31ee-42a6-917b-40a26f580b15" in namespace "var-expansion-5134" to be "success or failure" Mar 19 21:48:30.322: INFO: Pod "var-expansion-db666624-31ee-42a6-917b-40a26f580b15": Phase="Pending", Reason="", readiness=false. Elapsed: 5.672489ms Mar 19 21:48:32.326: INFO: Pod "var-expansion-db666624-31ee-42a6-917b-40a26f580b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009935755s Mar 19 21:48:34.330: INFO: Pod "var-expansion-db666624-31ee-42a6-917b-40a26f580b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014020377s STEP: Saw pod success Mar 19 21:48:34.330: INFO: Pod "var-expansion-db666624-31ee-42a6-917b-40a26f580b15" satisfied condition "success or failure" Mar 19 21:48:34.334: INFO: Trying to get logs from node jerma-worker pod var-expansion-db666624-31ee-42a6-917b-40a26f580b15 container dapi-container: STEP: delete the pod Mar 19 21:48:34.354: INFO: Waiting for pod var-expansion-db666624-31ee-42a6-917b-40a26f580b15 to disappear Mar 19 21:48:34.358: INFO: Pod var-expansion-db666624-31ee-42a6-917b-40a26f580b15 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:34.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5134" for this suite. 
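Substitution of the $(VAR) form in a container's command is performed by the kubelet from the container's declared env, before any shell sees it. A minimal sketch of a pod like the one above (the env name and value are illustrative; the suite's actual values are not in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    # the kubelet rewrites $(MESSAGE) to "test message" before exec
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
EOF
# once the pod completes, the logs show the substituted value
kubectl logs var-expansion-demo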
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:34.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:48:38.533: INFO: Waiting up to 5m0s for pod "client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43" in namespace "pods-1655" to be "success or failure" Mar 19 21:48:38.571: INFO: Pod "client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43": Phase="Pending", Reason="", readiness=false. Elapsed: 38.238525ms Mar 19 21:48:40.575: INFO: Pod "client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042201938s Mar 19 21:48:42.871: INFO: Pod "client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.337934981s STEP: Saw pod success Mar 19 21:48:42.871: INFO: Pod "client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43" satisfied condition "success or failure" Mar 19 21:48:42.874: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43 container env3cont: STEP: delete the pod Mar 19 21:48:42.899: INFO: Waiting for pod client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43 to disappear Mar 19 21:48:42.904: INFO: Pod client-envvars-d4b6cb80-3233-4c82-8006-344b97f09c43 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:42.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1655" for this suite. 
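Service environment variables are only injected into containers started after the service exists, which is why the test above creates its server pod and service before launching the client. The pattern is easy to see by hand (the service name here is illustrative; the kubelet upper-cases it when forming the variable names):

kubectl create service clusterip fooservice --tcp=8765:8080
# any pod started afterwards sees, among others:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
kubectl run env-check --image=busybox --restart=Never --rm -i -- env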
• [SLOW TEST:8.545 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2257,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:42.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-57862b67-60fe-4383-9f76-13b927935bac STEP: Creating a pod to test consume configMaps Mar 19 21:48:43.015: INFO: Waiting up to 5m0s for pod "pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf" in namespace "configmap-5416" to be "success or failure" Mar 19 21:48:43.030: INFO: Pod "pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.77536ms Mar 19 21:48:45.044: INFO: Pod "pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029403043s Mar 19 21:48:47.049: INFO: Pod "pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033762099s STEP: Saw pod success Mar 19 21:48:47.049: INFO: Pod "pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf" satisfied condition "success or failure" Mar 19 21:48:47.052: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf container configmap-volume-test: STEP: delete the pod Mar 19 21:48:47.124: INFO: Waiting for pod pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf to disappear Mar 19 21:48:47.131: INFO: Pod pod-configmaps-e11fc52f-5db6-404c-a17a-798669b98fcf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:47.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5416" for this suite. 
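The "with mappings as non-root" variant above combines an items: remap of a ConfigMap key onto a different path with a pod-level non-root securityContext. A condensed sketch (the key, target path, and UID are illustrative; only the ConfigMap name is taken from the log):

kubectl create configmap configmap-test-volume-map-57862b67-60fe-4383-9f76-13b927935bac --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-57862b67-60fe-4383-9f76-13b927935bac
      items:
      - key: data-1
        path: path/to/data-2
EOF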
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2277,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:47.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 19 21:48:47.323: INFO: Waiting up to 5m0s for pod "var-expansion-3f14215b-8730-4a63-8f16-33c973218941" in namespace "var-expansion-8072" to be "success or failure" Mar 19 21:48:47.328: INFO: Pod "var-expansion-3f14215b-8730-4a63-8f16-33c973218941": Phase="Pending", Reason="", readiness=false. Elapsed: 5.379653ms Mar 19 21:48:49.332: INFO: Pod "var-expansion-3f14215b-8730-4a63-8f16-33c973218941": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009764688s Mar 19 21:48:51.337: INFO: Pod "var-expansion-3f14215b-8730-4a63-8f16-33c973218941": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014079713s STEP: Saw pod success Mar 19 21:48:51.337: INFO: Pod "var-expansion-3f14215b-8730-4a63-8f16-33c973218941" satisfied condition "success or failure" Mar 19 21:48:51.340: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-3f14215b-8730-4a63-8f16-33c973218941 container dapi-container: STEP: delete the pod Mar 19 21:48:51.400: INFO: Waiting for pod var-expansion-3f14215b-8730-4a63-8f16-33c973218941 to disappear Mar 19 21:48:51.406: INFO: Pod var-expansion-3f14215b-8730-4a63-8f16-33c973218941 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:48:51.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8072" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2282,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:48:51.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:49:02.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7947" for this suite. • [SLOW TEST:11.152 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":148,"skipped":2292,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:49:02.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8094 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 19 21:49:02.659: INFO: Found 0 stateful pods, waiting for 3 Mar 19 21:49:12.663: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:49:12.663: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:49:12.663: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 19 21:49:12.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8094 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:49:12.945: INFO: stderr: "I0319 21:49:12.812811 2935 log.go:172] (0xc000540e70) (0xc00064da40) Create stream\nI0319 21:49:12.812866 2935 log.go:172] (0xc000540e70) (0xc00064da40) Stream added, broadcasting: 1\nI0319 21:49:12.815773 2935 log.go:172] (0xc000540e70) Reply frame received for 1\nI0319 21:49:12.815820 2935 log.go:172] (0xc000540e70) (0xc00064dc20) Create stream\nI0319 21:49:12.815834 2935 log.go:172] (0xc000540e70) 
(0xc00064dc20) Stream added, broadcasting: 3\nI0319 21:49:12.816833 2935 log.go:172] (0xc000540e70) Reply frame received for 3\nI0319 21:49:12.816881 2935 log.go:172] (0xc000540e70) (0xc00064dcc0) Create stream\nI0319 21:49:12.816894 2935 log.go:172] (0xc000540e70) (0xc00064dcc0) Stream added, broadcasting: 5\nI0319 21:49:12.817930 2935 log.go:172] (0xc000540e70) Reply frame received for 5\nI0319 21:49:12.912805 2935 log.go:172] (0xc000540e70) Data frame received for 5\nI0319 21:49:12.912832 2935 log.go:172] (0xc00064dcc0) (5) Data frame handling\nI0319 21:49:12.912852 2935 log.go:172] (0xc00064dcc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:49:12.938528 2935 log.go:172] (0xc000540e70) Data frame received for 3\nI0319 21:49:12.938567 2935 log.go:172] (0xc00064dc20) (3) Data frame handling\nI0319 21:49:12.938585 2935 log.go:172] (0xc00064dc20) (3) Data frame sent\nI0319 21:49:12.938615 2935 log.go:172] (0xc000540e70) Data frame received for 3\nI0319 21:49:12.938628 2935 log.go:172] (0xc00064dc20) (3) Data frame handling\nI0319 21:49:12.938659 2935 log.go:172] (0xc000540e70) Data frame received for 5\nI0319 21:49:12.938691 2935 log.go:172] (0xc00064dcc0) (5) Data frame handling\nI0319 21:49:12.940636 2935 log.go:172] (0xc000540e70) Data frame received for 1\nI0319 21:49:12.940671 2935 log.go:172] (0xc00064da40) (1) Data frame handling\nI0319 21:49:12.940702 2935 log.go:172] (0xc00064da40) (1) Data frame sent\nI0319 21:49:12.940727 2935 log.go:172] (0xc000540e70) (0xc00064da40) Stream removed, broadcasting: 1\nI0319 21:49:12.940836 2935 log.go:172] (0xc000540e70) Go away received\nI0319 21:49:12.941439 2935 log.go:172] (0xc000540e70) (0xc00064da40) Stream removed, broadcasting: 1\nI0319 21:49:12.941467 2935 log.go:172] (0xc000540e70) (0xc00064dc20) Stream removed, broadcasting: 3\nI0319 21:49:12.941480 2935 log.go:172] (0xc000540e70) (0xc00064dcc0) Stream removed, broadcasting: 5\n" Mar 19 21:49:12.945: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:49:12.945: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 19 21:49:22.978: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 19 21:49:33.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8094 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:49:33.258: INFO: stderr: "I0319 21:49:33.168968 2958 log.go:172] (0xc0005d4840) (0xc0005ce000) Create stream\nI0319 21:49:33.169022 2958 log.go:172] (0xc0005d4840) (0xc0005ce000) Stream added, broadcasting: 1\nI0319 21:49:33.172112 2958 log.go:172] (0xc0005d4840) Reply frame received for 1\nI0319 21:49:33.172180 2958 log.go:172] (0xc0005d4840) (0xc0006219a0) Create stream\nI0319 21:49:33.172215 2958 log.go:172] (0xc0005d4840) (0xc0006219a0) Stream added, broadcasting: 3\nI0319 21:49:33.173326 2958 log.go:172] (0xc0005d4840) Reply frame received for 3\nI0319 21:49:33.173361 2958 log.go:172] (0xc0005d4840) (0xc0001fc000) Create stream\nI0319 21:49:33.173372 2958 log.go:172] (0xc0005d4840) (0xc0001fc000) Stream added, broadcasting: 5\nI0319 21:49:33.174262 2958 log.go:172] (0xc0005d4840) Reply frame received for 5\nI0319 21:49:33.252751 2958 
log.go:172] (0xc0005d4840) Data frame received for 5\nI0319 21:49:33.252787 2958 log.go:172] (0xc0001fc000) (5) Data frame handling\nI0319 21:49:33.252802 2958 log.go:172] (0xc0001fc000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:49:33.252825 2958 log.go:172] (0xc0005d4840) Data frame received for 5\nI0319 21:49:33.252880 2958 log.go:172] (0xc0001fc000) (5) Data frame handling\nI0319 21:49:33.252915 2958 log.go:172] (0xc0005d4840) Data frame received for 3\nI0319 21:49:33.252953 2958 log.go:172] (0xc0006219a0) (3) Data frame handling\nI0319 21:49:33.252990 2958 log.go:172] (0xc0006219a0) (3) Data frame sent\nI0319 21:49:33.253017 2958 log.go:172] (0xc0005d4840) Data frame received for 3\nI0319 21:49:33.253036 2958 log.go:172] (0xc0006219a0) (3) Data frame handling\nI0319 21:49:33.254720 2958 log.go:172] (0xc0005d4840) Data frame received for 1\nI0319 21:49:33.254741 2958 log.go:172] (0xc0005ce000) (1) Data frame handling\nI0319 21:49:33.254752 2958 log.go:172] (0xc0005ce000) (1) Data frame sent\nI0319 21:49:33.254763 2958 log.go:172] (0xc0005d4840) (0xc0005ce000) Stream removed, broadcasting: 1\nI0319 21:49:33.254783 2958 log.go:172] (0xc0005d4840) Go away received\nI0319 21:49:33.255050 2958 log.go:172] (0xc0005d4840) (0xc0005ce000) Stream removed, broadcasting: 1\nI0319 21:49:33.255065 2958 log.go:172] (0xc0005d4840) (0xc0006219a0) Stream removed, broadcasting: 3\nI0319 21:49:33.255072 2958 log.go:172] (0xc0005d4840) (0xc0001fc000) Stream removed, broadcasting: 5\n" Mar 19 21:49:33.258: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:49:33.258: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:49:53.279: INFO: Waiting for StatefulSet statefulset-8094/ss2 to complete update Mar 19 21:49:53.279: INFO: Waiting for Pod statefulset-8094/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 19 21:50:03.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8094 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 19 21:50:03.535: INFO: stderr: "I0319 21:50:03.416346 2980 log.go:172] (0xc0009bef20) (0xc000918460) Create stream\nI0319 21:50:03.416399 2980 log.go:172] (0xc0009bef20) (0xc000918460) Stream added, broadcasting: 1\nI0319 21:50:03.421777 2980 log.go:172] (0xc0009bef20) Reply frame received for 1\nI0319 21:50:03.421816 2980 log.go:172] (0xc0009bef20) (0xc0005f2640) Create stream\nI0319 21:50:03.421827 2980 log.go:172] (0xc0009bef20) (0xc0005f2640) Stream added, broadcasting: 3\nI0319 21:50:03.422866 2980 log.go:172] (0xc0009bef20) Reply frame received for 3\nI0319 21:50:03.422905 2980 log.go:172] (0xc0009bef20) (0xc0003cb400) Create stream\nI0319 21:50:03.422919 2980 log.go:172] (0xc0009bef20) (0xc0003cb400) Stream added, broadcasting: 5\nI0319 21:50:03.423764 2980 log.go:172] (0xc0009bef20) Reply frame received for 5\nI0319 21:50:03.498757 2980 log.go:172] (0xc0009bef20) Data frame received for 5\nI0319 21:50:03.498784 2980 log.go:172] (0xc0003cb400) (5) Data frame handling\nI0319 21:50:03.498803 2980 log.go:172] (0xc0003cb400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0319 21:50:03.528814 2980 log.go:172] (0xc0009bef20) Data frame received for 3\nI0319 21:50:03.528846 2980 log.go:172] (0xc0005f2640) (3) Data frame handling\nI0319 
21:50:03.528864 2980 log.go:172] (0xc0005f2640) (3) Data frame sent\nI0319 21:50:03.528879 2980 log.go:172] (0xc0009bef20) Data frame received for 3\nI0319 21:50:03.528893 2980 log.go:172] (0xc0005f2640) (3) Data frame handling\nI0319 21:50:03.528935 2980 log.go:172] (0xc0009bef20) Data frame received for 5\nI0319 21:50:03.528954 2980 log.go:172] (0xc0003cb400) (5) Data frame handling\nI0319 21:50:03.531201 2980 log.go:172] (0xc0009bef20) Data frame received for 1\nI0319 21:50:03.531231 2980 log.go:172] (0xc000918460) (1) Data frame handling\nI0319 21:50:03.531249 2980 log.go:172] (0xc000918460) (1) Data frame sent\nI0319 21:50:03.531266 2980 log.go:172] (0xc0009bef20) (0xc000918460) Stream removed, broadcasting: 1\nI0319 21:50:03.531331 2980 log.go:172] (0xc0009bef20) Go away received\nI0319 21:50:03.531567 2980 log.go:172] (0xc0009bef20) (0xc000918460) Stream removed, broadcasting: 1\nI0319 21:50:03.531585 2980 log.go:172] (0xc0009bef20) (0xc0005f2640) Stream removed, broadcasting: 3\nI0319 21:50:03.531595 2980 log.go:172] (0xc0009bef20) (0xc0003cb400) Stream removed, broadcasting: 5\n" Mar 19 21:50:03.536: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 19 21:50:03.536: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 19 21:50:13.568: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 19 21:50:23.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8094 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 19 21:50:23.840: INFO: stderr: "I0319 21:50:23.754671 2999 log.go:172] (0xc000a4e0b0) (0xc0007634a0) Create stream\nI0319 21:50:23.754742 2999 log.go:172] (0xc000a4e0b0) (0xc0007634a0) Stream added, broadcasting: 1\nI0319 21:50:23.756953 2999 log.go:172] (0xc000a4e0b0) Reply frame received for 1\nI0319 21:50:23.757012 2999 log.go:172] (0xc000a4e0b0) (0xc00095c000) Create stream\nI0319 21:50:23.757031 2999 log.go:172] (0xc000a4e0b0) (0xc00095c000) Stream added, broadcasting: 3\nI0319 21:50:23.758019 2999 log.go:172] (0xc000a4e0b0) Reply frame received for 3\nI0319 21:50:23.758064 2999 log.go:172] (0xc000a4e0b0) (0xc00062da40) Create stream\nI0319 21:50:23.758080 2999 log.go:172] (0xc000a4e0b0) (0xc00062da40) Stream added, broadcasting: 5\nI0319 21:50:23.758921 2999 log.go:172] (0xc000a4e0b0) Reply frame received for 5\nI0319 21:50:23.833254 2999 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0319 21:50:23.833293 2999 log.go:172] (0xc00062da40) (5) Data frame handling\nI0319 21:50:23.833307 2999 log.go:172] (0xc00062da40) (5) Data frame sent\nI0319 21:50:23.833317 2999 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0319 21:50:23.833324 2999 log.go:172] (0xc00062da40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0319 21:50:23.833518 2999 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0319 21:50:23.833545 2999 log.go:172] (0xc00095c000) (3) Data frame handling\nI0319 21:50:23.833567 2999 log.go:172] (0xc00095c000) (3) Data frame sent\nI0319 21:50:23.833592 2999 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0319 21:50:23.833602 2999 log.go:172] (0xc00095c000) (3) Data frame handling\nI0319 21:50:23.835221 2999 log.go:172] (0xc000a4e0b0) Data frame received for 1\nI0319 21:50:23.835243 2999 log.go:172] (0xc0007634a0) (1) Data frame handling\nI0319 21:50:23.835256 2999 log.go:172] 
(0xc0007634a0) (1) Data frame sent\nI0319 21:50:23.835387 2999 log.go:172] (0xc000a4e0b0) (0xc0007634a0) Stream removed, broadcasting: 1\nI0319 21:50:23.835754 2999 log.go:172] (0xc000a4e0b0) (0xc0007634a0) Stream removed, broadcasting: 1\nI0319 21:50:23.835775 2999 log.go:172] (0xc000a4e0b0) (0xc00095c000) Stream removed, broadcasting: 3\nI0319 21:50:23.835787 2999 log.go:172] (0xc000a4e0b0) (0xc00062da40) Stream removed, broadcasting: 5\n" Mar 19 21:50:23.840: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 19 21:50:23.840: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 19 21:50:43.856: INFO: Waiting for StatefulSet statefulset-8094/ss2 to complete update Mar 19 21:50:43.856: INFO: Waiting for Pod statefulset-8094/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 19 21:50:53.897: INFO: Deleting all statefulset in ns statefulset-8094 Mar 19 21:50:53.900: INFO: Scaling statefulset ss2 to 0 Mar 19 21:51:13.931: INFO: Waiting for statefulset status.replicas updated to 0 Mar 19 21:51:13.934: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:51:13.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8094" for this suite. • [SLOW TEST:131.405 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":149,"skipped":2294,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:51:13.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:51:14.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9152" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":150,"skipped":2302,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:51:14.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:51:14.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5" in namespace "projected-9355" to be "success or failure" Mar 19 21:51:14.232: INFO: Pod "downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.607087ms Mar 19 21:51:16.290: INFO: Pod "downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088873397s Mar 19 21:51:18.294: INFO: Pod "downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092985797s STEP: Saw pod success Mar 19 21:51:18.294: INFO: Pod "downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5" satisfied condition "success or failure" Mar 19 21:51:18.297: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5 container client-container: STEP: delete the pod Mar 19 21:51:18.346: INFO: Waiting for pod downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5 to disappear Mar 19 21:51:18.348: INFO: Pod downwardapi-volume-eb8f529c-29d3-48c5-9048-8e40b7753cd5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:51:18.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9355" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:51:18.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:51:35.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7008" for this suite. • [SLOW TEST:17.163 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":152,"skipped":2355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:51:35.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 19 21:51:35.586: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 19 21:51:44.627: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:51:44.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1998" for this suite. 
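The submit/observe/remove flow above can be replayed by hand; kubectl wait --for=delete stands in for the watch the test sets up (the pod name, image, and grace period are illustrative):

kubectl run watched-pod --image=busybox --restart=Never -- sleep 3600
kubectl delete pod watched-pod --grace-period=30 --wait=false
# returns once the deletion is observed, mirroring the watch in the test
kubectl wait --for=delete pod/watched-pod --timeout=60s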
• [SLOW TEST:9.117 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2423,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:51:44.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6746 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 19 21:51:44.699: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 19 21:52:06.817: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.18 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6746 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:52:06.817: INFO: >>> kubeConfig: /root/.kube/config I0319 21:52:06.851172 6 log.go:172] (0xc005230bb0) (0xc00234e780) Create stream I0319 21:52:06.851203 6 log.go:172] (0xc005230bb0) (0xc00234e780) Stream added, broadcasting: 1 I0319 21:52:06.853671 6 log.go:172] (0xc005230bb0) Reply frame received for 1 I0319 21:52:06.853719 6 log.go:172] (0xc005230bb0) (0xc001f72500) Create stream I0319 21:52:06.853732 6 log.go:172] (0xc005230bb0) (0xc001f72500) Stream added, broadcasting: 3 I0319 21:52:06.855243 6 log.go:172] (0xc005230bb0) Reply frame received for 3 I0319 21:52:06.855285 6 log.go:172] (0xc005230bb0) (0xc001f72640) Create stream I0319 21:52:06.855301 6 log.go:172] (0xc005230bb0) (0xc001f72640) Stream added, broadcasting: 5 I0319 21:52:06.856439 6 log.go:172] (0xc005230bb0) Reply frame received for 5 I0319 21:52:07.937480 6 log.go:172] (0xc005230bb0) Data frame received for 3 I0319 21:52:07.937604 6 log.go:172] (0xc001f72500) (3) Data frame handling I0319 21:52:07.937740 6 log.go:172] (0xc001f72500) (3) Data frame sent I0319 21:52:07.937795 6 log.go:172] (0xc005230bb0) Data frame received for 5 I0319 21:52:07.937833 6 log.go:172] (0xc001f72640) (5) Data frame handling I0319 21:52:07.937860 6 log.go:172] (0xc005230bb0) Data frame received for 3 I0319 21:52:07.937970 6 log.go:172] (0xc001f72500) (3) Data frame handling I0319 21:52:07.940663 6 log.go:172] (0xc005230bb0) Data frame received for 1 I0319 21:52:07.940706 6 log.go:172] (0xc00234e780) (1) Data frame handling I0319 21:52:07.940743 6 log.go:172] (0xc00234e780) (1) Data frame sent I0319 21:52:07.940778 6 log.go:172] (0xc005230bb0) (0xc00234e780) Stream removed, broadcasting: 1 I0319 
21:52:07.940880 6 log.go:172] (0xc005230bb0) Go away received I0319 21:52:07.940940 6 log.go:172] (0xc005230bb0) (0xc00234e780) Stream removed, broadcasting: 1 I0319 21:52:07.940996 6 log.go:172] (0xc005230bb0) (0xc001f72500) Stream removed, broadcasting: 3 I0319 21:52:07.941028 6 log.go:172] (0xc005230bb0) (0xc001f72640) Stream removed, broadcasting: 5 Mar 19 21:52:07.941: INFO: Found all expected endpoints: [netserver-0] Mar 19 21:52:07.945: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.54 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6746 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 21:52:07.945: INFO: >>> kubeConfig: /root/.kube/config I0319 21:52:07.978075 6 log.go:172] (0xc005231130) (0xc00234f0e0) Create stream I0319 21:52:07.978105 6 log.go:172] (0xc005231130) (0xc00234f0e0) Stream added, broadcasting: 1 I0319 21:52:07.979991 6 log.go:172] (0xc005231130) Reply frame received for 1 I0319 21:52:07.980038 6 log.go:172] (0xc005231130) (0xc0012f1d60) Create stream I0319 21:52:07.980051 6 log.go:172] (0xc005231130) (0xc0012f1d60) Stream added, broadcasting: 3 I0319 21:52:07.980993 6 log.go:172] (0xc005231130) Reply frame received for 3 I0319 21:52:07.981037 6 log.go:172] (0xc005231130) (0xc00113f720) Create stream I0319 21:52:07.981053 6 log.go:172] (0xc005231130) (0xc00113f720) Stream added, broadcasting: 5 I0319 21:52:07.982168 6 log.go:172] (0xc005231130) Reply frame received for 5 I0319 21:52:09.050059 6 log.go:172] (0xc005231130) Data frame received for 5 I0319 21:52:09.050113 6 log.go:172] (0xc00113f720) (5) Data frame handling I0319 21:52:09.050148 6 log.go:172] (0xc005231130) Data frame received for 3 I0319 21:52:09.050170 6 log.go:172] (0xc0012f1d60) (3) Data frame handling I0319 21:52:09.050249 6 log.go:172] (0xc0012f1d60) (3) Data frame sent I0319 21:52:09.050276 6 log.go:172] (0xc005231130) Data frame received for 3 I0319 21:52:09.050299 6 log.go:172] (0xc0012f1d60) (3) Data frame handling I0319 21:52:09.052092 6 log.go:172] (0xc005231130) Data frame received for 1 I0319 21:52:09.052109 6 log.go:172] (0xc00234f0e0) (1) Data frame handling I0319 21:52:09.052127 6 log.go:172] (0xc00234f0e0) (1) Data frame sent I0319 21:52:09.052160 6 log.go:172] (0xc005231130) (0xc00234f0e0) Stream removed, broadcasting: 1 I0319 21:52:09.052250 6 log.go:172] (0xc005231130) Go away received I0319 21:52:09.052283 6 log.go:172] (0xc005231130) (0xc00234f0e0) Stream removed, broadcasting: 1 I0319 21:52:09.052295 6 log.go:172] (0xc005231130) (0xc0012f1d60) Stream removed, broadcasting: 3 I0319 21:52:09.052322 6 log.go:172] (0xc005231130) (0xc00113f720) Stream removed, broadcasting: 5 Mar 19 21:52:09.052: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:09.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6746" for this suite. 
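The UDP check above boils down to running nc from the host-network test pod against each netserver pod IP on port 8081. The same probe can be issued manually; the namespace and pod IP below are the ones from this run (both are gone after teardown), so substitute values from kubectl get pods -o wide:

  kubectl exec --namespace=pod-network-test-6746 host-test-container-pod -c agnhost -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.18 8081"
  # a non-empty reply (the netserver's hostname) means node-to-pod UDP traffic works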
• [SLOW TEST:24.423 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2430,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:09.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:52:10.043: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:52:12.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251530, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251530, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251530, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251530, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:52:15.102: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 19 21:52:19.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-7299 to-be-attached-pod -i -c=container1' Mar 19 21:52:19.324: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:19.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7299" for this suite. STEP: Destroying namespace "webhook-7299-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.337 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":155,"skipped":2444,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:19.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 19 21:52:19.494: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:19.520: INFO: Number of nodes with available pods: 0 Mar 19 21:52:19.520: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:20.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:20.620: INFO: Number of nodes with available pods: 0 Mar 19 21:52:20.621: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:21.689: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:21.692: INFO: Number of nodes with available pods: 0 Mar 19 21:52:21.692: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:22.525: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:22.528: INFO: Number of nodes with available pods: 0 Mar 19 21:52:22.528: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:23.581: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:23.584: INFO: Number of nodes with available pods: 0 Mar 19 21:52:23.584: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:24.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:24.555: INFO: Number of nodes with available pods: 2 Mar 19 21:52:24.555: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 19 21:52:24.599: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:24.610: INFO: Number of nodes with available pods: 1 Mar 19 21:52:24.610: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:25.628: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:25.791: INFO: Number of nodes with available pods: 1 Mar 19 21:52:25.791: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:26.629: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:26.632: INFO: Number of nodes with available pods: 1 Mar 19 21:52:26.632: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:27.707: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:27.711: INFO: Number of nodes with available pods: 1 Mar 19 21:52:27.711: INFO: Node jerma-worker is running more than one daemon pod Mar 19 21:52:28.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 21:52:28.639: INFO: Number of nodes with available pods: 2 Mar 19 21:52:28.639: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4772, will wait for the garbage collector to delete the pods Mar 19 21:52:28.704: INFO: Deleting DaemonSet.extensions daemon-set took: 6.361318ms Mar 19 21:52:28.804: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.261314ms Mar 19 21:52:39.608: INFO: Number of nodes with available pods: 0 Mar 19 21:52:39.608: INFO: Number of running nodes: 0, number of available pods: 0 Mar 19 21:52:39.611: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4772/daemonsets","resourceVersion":"1124513"},"items":null} Mar 19 21:52:39.613: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4772/pods","resourceVersion":"1124513"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:39.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4772" for this suite. 
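The behavior verified above (a daemon pod on every schedulable node, and failed pods being replaced) can be approximated by hand. A sketch reusing the httpd image that appears elsewhere in this run; note the test marks a pod Failed through a status update, which plain kubectl cannot do, so deleting a pod stands in for the failure here. Nodes carrying the node-role.kubernetes.io/master NoSchedule taint receive no pod unless the template adds a toleration:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/httpd:2.4.38-alpine
  EOF
  # remove a pod and watch the controller retry until a replacement is available
  kubectl delete pod -l app=daemon-set
  kubectl get pods -l app=daemon-set -w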
• [SLOW TEST:20.249 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":156,"skipped":2449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:39.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 19 21:52:39.690: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:46.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9685" for this suite. 
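A RestartNever pod with init containers, as exercised above, runs each init container to completion, in order, before the app container starts, and restarts none of them. A minimal sketch; the busybox tag is illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ['sh', '-c', 'true']
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ['sh', '-c', 'true']
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ['sh', '-c', 'sleep 5']
  EOF
  # status progresses Init:0/2 -> Init:1/2 -> PodInitializing -> Running -> Completed
  kubectl get pod init-demo -w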
• [SLOW TEST:6.798 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":157,"skipped":2499,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:46.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 19 21:52:46.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 19 21:52:46.903: INFO: stderr: "" Mar 19 21:52:46.903: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:46.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-829" for this suite. 
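The api-versions check above reduces to a single command: the bare core group/version "v1" must appear in the server's discovery output.

  kubectl api-versions | grep -x v1   # exit code 0 only if the exact line "v1" is present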
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":158,"skipped":2513,"failed":0} ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:46.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 19 21:52:46.999: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 19 21:52:52.022: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:52.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8373" for this suite. • [SLOW TEST:5.211 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":159,"skipped":2513,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:52.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:52:52.706: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:52:54.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251572, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720251572, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251572, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251572, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:52:57.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:52:58.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2974" for this suite. STEP: Destroying namespace "webhook-2974-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.447 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":160,"skipped":2520,"failed":0} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:52:58.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-50efeb05-443d-49b5-9b6b-b21419be8314 STEP: Creating secret with name s-test-opt-upd-938b15fb-716f-4b02-88f3-3ed01e748be5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-50efeb05-443d-49b5-9b6b-b21419be8314 STEP: Updating secret s-test-opt-upd-938b15fb-716f-4b02-88f3-3ed01e748be5 STEP: Creating secret with name s-test-opt-create-41f3259f-5e84-4d2c-bc18-8084c6f67c54 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:54:25.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1981" for this suite. • [SLOW TEST:86.787 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:54:25.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7587.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7587.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7587.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7587.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 21:54:31.483: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:31.488: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:31.494: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:31.500: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:31.503: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:31.507: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:31.528: INFO: Lookups using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7587.svc.cluster.local] Mar 19 21:54:36.532: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:36.534: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:36.549: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:36.551: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:36.562: INFO: Lookups using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local] Mar 19 21:54:41.532: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:41.535: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:41.550: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:41.552: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:41.565: INFO: Lookups using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local] Mar 19 21:54:46.533: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:46.537: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:46.554: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:46.557: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:46.570: INFO: Lookups using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local] Mar 19 21:54:51.534: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:51.538: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:51.553: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:51.556: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:51.567: INFO: Lookups using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local] Mar 19 21:54:56.534: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:56.538: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods 
dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:56.555: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:56.557: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local from pod dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0: the server could not find the requested resource (get pods dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0) Mar 19 21:54:56.570: INFO: Lookups using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7587.svc.cluster.local] Mar 19 21:55:01.569: INFO: DNS probes using dns-7587/dns-test-f248727d-2e90-44a9-8c17-226bc818f0f0 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:01.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7587" for this suite. • [SLOW TEST:36.518 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":162,"skipped":2548,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:01.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 19 21:55:02.526: INFO: Waiting up to 5m0s for pod "client-containers-0fefa0b9-0046-495c-a742-791968ec4010" in namespace "containers-5106" to be "success or failure" Mar 19 21:55:02.530: INFO: Pod "client-containers-0fefa0b9-0046-495c-a742-791968ec4010": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308958ms Mar 19 21:55:04.533: INFO: Pod "client-containers-0fefa0b9-0046-495c-a742-791968ec4010": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007391707s Mar 19 21:55:06.577: INFO: Pod "client-containers-0fefa0b9-0046-495c-a742-791968ec4010": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051150739s STEP: Saw pod success Mar 19 21:55:06.577: INFO: Pod "client-containers-0fefa0b9-0046-495c-a742-791968ec4010" satisfied condition "success or failure" Mar 19 21:55:06.580: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0fefa0b9-0046-495c-a742-791968ec4010 container test-container: STEP: delete the pod Mar 19 21:55:06.607: INFO: Waiting for pod client-containers-0fefa0b9-0046-495c-a742-791968ec4010 to disappear Mar 19 21:55:06.612: INFO: Pod client-containers-0fefa0b9-0046-495c-a742-791968ec4010 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:06.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5106" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:06.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 19 21:55:07.218: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 19 21:55:09.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251707, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251707, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251707, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251707, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:55:12.256: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to 
convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:55:12.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:13.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6665" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.942 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":164,"skipped":2618,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:13.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:55:13.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac" in namespace "projected-9812" to be "success or failure" Mar 19 21:55:13.644: INFO: Pod "downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac": Phase="Pending", Reason="", readiness=false. Elapsed: 18.469948ms Mar 19 21:55:15.667: INFO: Pod "downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041021768s Mar 19 21:55:17.671: INFO: Pod "downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045363006s STEP: Saw pod success Mar 19 21:55:17.671: INFO: Pod "downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac" satisfied condition "success or failure" Mar 19 21:55:17.675: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac container client-container: STEP: delete the pod Mar 19 21:55:17.692: INFO: Waiting for pod downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac to disappear Mar 19 21:55:17.696: INFO: Pod downwardapi-volume-a9ffb669-be88-4a0c-912b-907aaa2826ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:17.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9812" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:17.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 21:55:17.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3406' Mar 19 21:55:17.860: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 19 21:55:17.860: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 19 21:55:21.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3406' Mar 19 21:55:22.020: INFO: stderr: "" Mar 19 21:55:22.020: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:22.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3406" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":166,"skipped":2658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:22.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bchwl in namespace proxy-6671 I0319 21:55:23.272773 6 runners.go:189] Created replication controller with name: proxy-service-bchwl, namespace: proxy-6671, replica count: 1 I0319 21:55:24.323279 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 21:55:25.323506 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 21:55:26.323771 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:27.324029 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:28.324313 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:29.324552 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:30.324783 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:31.325087 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:32.325458 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:33.325701 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:34.325926 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:35.326154 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0319 21:55:36.326421 6 runners.go:189] proxy-service-bchwl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 19 21:55:36.330: INFO: setup took 13.819389672s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 19 21:55:36.336: INFO: (0) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 5.612151ms) Mar 19 21:55:36.337: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 6.698166ms) Mar 19 21:55:36.339: INFO: (0) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 8.686358ms) Mar 19 21:55:36.340: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 9.500737ms) Mar 19 21:55:36.340: INFO: (0) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 9.412278ms) Mar 19 21:55:36.341: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 11.024127ms) Mar 19 21:55:36.341: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 11.214716ms) Mar 19 21:55:36.344: INFO: (0) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 13.145968ms) Mar 19 21:55:36.344: INFO: (0) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 13.466042ms) Mar 19 21:55:36.344: INFO: (0) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 14.084823ms) Mar 19 21:55:36.345: INFO: (0) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 14.342781ms) Mar 19 21:55:36.346: INFO: (0) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 15.651416ms) Mar 19 21:55:36.346: INFO: (0) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 15.634232ms) Mar 19 21:55:36.346: INFO: (0) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 15.747728ms) Mar 19 21:55:36.346: INFO: (0) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 15.66332ms) Mar 19 21:55:36.347: INFO: (0) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 2.785858ms) Mar 19 21:55:36.350: INFO: (1) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... 
(200; 2.862845ms) Mar 19 21:55:36.350: INFO: (1) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.219943ms) Mar 19 21:55:36.351: INFO: (1) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 3.573219ms) Mar 19 21:55:36.351: INFO: (1) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 4.121506ms) Mar 19 21:55:36.351: INFO: (1) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 4.197486ms) Mar 19 21:55:36.351: INFO: (1) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.27928ms) Mar 19 21:55:36.351: INFO: (1) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test (200; 3.548175ms) Mar 19 21:55:36.356: INFO: (2) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 3.520643ms) Mar 19 21:55:36.356: INFO: (2) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 3.575939ms) Mar 19 21:55:36.356: INFO: (2) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.127146ms) Mar 19 21:55:36.356: INFO: (2) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 4.089401ms) Mar 19 21:55:36.356: INFO: (2) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.246797ms) Mar 19 21:55:36.357: INFO: (2) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.251748ms) Mar 19 21:55:36.358: INFO: (2) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 5.70973ms) Mar 19 21:55:36.358: INFO: (2) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 5.825935ms) Mar 19 21:55:36.358: INFO: (2) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 5.866671ms) Mar 19 21:55:36.358: INFO: (2) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 5.872678ms) Mar 19 21:55:36.358: INFO: (2) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 5.849148ms) Mar 19 21:55:36.359: INFO: (2) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 6.451415ms) Mar 19 21:55:36.362: INFO: (3) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 3.402548ms) Mar 19 21:55:36.362: INFO: (3) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... 
(200; 3.604786ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.969828ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 4.178666ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.186358ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 4.216458ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.24955ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 4.385617ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 4.516022ms) Mar 19 21:55:36.363: INFO: (3) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 4.723606ms) Mar 19 21:55:36.364: INFO: (3) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 5.107345ms) Mar 19 21:55:36.364: INFO: (3) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 5.166168ms) Mar 19 21:55:36.364: INFO: (3) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 5.360771ms) Mar 19 21:55:36.364: INFO: (3) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 5.444374ms) Mar 19 21:55:36.364: INFO: (3) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 5.442678ms) Mar 19 21:55:36.367: INFO: (4) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.044096ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 3.277945ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 3.977355ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 3.926837ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 3.952571ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 3.98917ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 3.993409ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 4.011843ms) Mar 19 21:55:36.368: INFO: (4) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: ... (200; 2.220747ms) Mar 19 21:55:36.372: INFO: (5) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 2.868638ms) Mar 19 21:55:36.372: INFO: (5) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 3.139601ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 3.769045ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... 
(200; 3.81977ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.876957ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.884784ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 3.974785ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 4.108541ms) Mar 19 21:55:36.373: INFO: (5) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 3.226043ms) Mar 19 21:55:36.377: INFO: (6) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.207878ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 3.602562ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 3.67187ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 3.694772ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 3.700347ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 3.769541ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 3.766709ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 4.184182ms) Mar 19 21:55:36.378: INFO: (6) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test (200; 2.182289ms) Mar 19 21:55:36.382: INFO: (7) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 2.878238ms) Mar 19 21:55:36.382: INFO: (7) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 2.885655ms) Mar 19 21:55:36.383: INFO: (7) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 4.342251ms) Mar 19 21:55:36.384: INFO: (7) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 4.814392ms) Mar 19 21:55:36.384: INFO: (7) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 4.850083ms) Mar 19 21:55:36.384: INFO: (7) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 4.97314ms) Mar 19 21:55:36.384: INFO: (7) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 5.135457ms) Mar 19 21:55:36.384: INFO: (7) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 5.260465ms) Mar 19 21:55:36.384: INFO: (7) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: ... 
(200; 5.484647ms) Mar 19 21:55:36.385: INFO: (7) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 5.565625ms) Mar 19 21:55:36.385: INFO: (7) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 5.798765ms) Mar 19 21:55:36.385: INFO: (7) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 5.821656ms) Mar 19 21:55:36.385: INFO: (7) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 5.756162ms) Mar 19 21:55:36.388: INFO: (8) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 3.260863ms) Mar 19 21:55:36.388: INFO: (8) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 3.340026ms) Mar 19 21:55:36.389: INFO: (8) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 4.3198ms) Mar 19 21:55:36.389: INFO: (8) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 4.44312ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 4.462132ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 4.557042ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 4.460648ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.555802ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.561515ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 4.698341ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 4.781745ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 4.699543ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 4.709873ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.93668ms) Mar 19 21:55:36.390: INFO: (8) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... 
(200; 2.114854ms) Mar 19 21:55:36.395: INFO: (9) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.975329ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 5.263401ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 5.5138ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 5.482374ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 5.522684ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 5.699742ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 5.758695ms) Mar 19 21:55:36.396: INFO: (9) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: ... (200; 5.74227ms) Mar 19 21:55:36.397: INFO: (9) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 6.676322ms) Mar 19 21:55:36.398: INFO: (9) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 7.201788ms) Mar 19 21:55:36.398: INFO: (9) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 7.247674ms) Mar 19 21:55:36.398: INFO: (9) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 7.328773ms) Mar 19 21:55:36.398: INFO: (9) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 7.355335ms) Mar 19 21:55:36.398: INFO: (9) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 7.563763ms) Mar 19 21:55:36.401: INFO: (10) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 3.51453ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 3.453954ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 3.546824ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 4.314523ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.315842ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 4.339108ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 4.341077ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 4.440074ms) Mar 19 21:55:36.402: INFO: (10) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 4.472665ms) Mar 19 21:55:36.403: INFO: (10) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 4.635033ms) Mar 19 21:55:36.403: INFO: (10) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 5.176229ms) Mar 19 21:55:36.407: INFO: (11) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... 
(200; 3.972723ms) Mar 19 21:55:36.408: INFO: (11) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.857036ms) Mar 19 21:55:36.408: INFO: (11) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 4.931155ms) Mar 19 21:55:36.408: INFO: (11) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 5.03849ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 5.187449ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 5.312171ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 5.218602ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 5.241839ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 5.329448ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 5.237573ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 5.308935ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 5.310935ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 5.261954ms) Mar 19 21:55:36.409: INFO: (11) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 11.218082ms) Mar 19 21:55:36.420: INFO: (12) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 11.291888ms) Mar 19 21:55:36.420: INFO: (12) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 11.343846ms) Mar 19 21:55:36.420: INFO: (12) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 11.308029ms) Mar 19 21:55:36.420: INFO: (12) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 11.307245ms) Mar 19 21:55:36.421: INFO: (12) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 11.397087ms) Mar 19 21:55:36.421: INFO: (12) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 2.979272ms) Mar 19 21:55:36.425: INFO: (13) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test (200; 20.003255ms) Mar 19 21:55:36.441: INFO: (13) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 20.042055ms) Mar 19 21:55:36.441: INFO: (13) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 20.185142ms) Mar 19 21:55:36.441: INFO: (13) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 20.13697ms) Mar 19 21:55:36.442: INFO: (13) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 20.165309ms) Mar 19 21:55:36.442: INFO: (13) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 20.484727ms) Mar 19 21:55:36.442: INFO: (13) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... 
(200; 20.724923ms) Mar 19 21:55:36.442: INFO: (13) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 20.974725ms) Mar 19 21:55:36.442: INFO: (13) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 20.994227ms) Mar 19 21:55:36.442: INFO: (13) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 21.144797ms) Mar 19 21:55:36.443: INFO: (13) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 20.472469ms) Mar 19 21:55:36.443: INFO: (13) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 21.775816ms) Mar 19 21:55:36.464: INFO: (14) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 21.291195ms) Mar 19 21:55:36.465: INFO: (14) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 21.444076ms) Mar 19 21:55:36.466: INFO: (14) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 22.939949ms) Mar 19 21:55:36.466: INFO: (14) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 23.263363ms) Mar 19 21:55:36.466: INFO: (14) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 23.265403ms) Mar 19 21:55:36.466: INFO: (14) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 23.341899ms) Mar 19 21:55:36.466: INFO: (14) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 23.274995ms) Mar 19 21:55:36.466: INFO: (14) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 23.357876ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 23.569782ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 23.735825ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 23.597077ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 23.79203ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 23.72609ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test (200; 23.877903ms) Mar 19 21:55:36.467: INFO: (14) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 23.941154ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 6.752503ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 6.837821ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 6.9826ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 7.098111ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... 
(200; 6.951103ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 7.064618ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 7.098493ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 7.172602ms) Mar 19 21:55:36.474: INFO: (15) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test (200; 15.672254ms) Mar 19 21:55:36.497: INFO: (16) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 15.716068ms) Mar 19 21:55:36.497: INFO: (16) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 15.71284ms) Mar 19 21:55:36.497: INFO: (16) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 15.743064ms) Mar 19 21:55:36.498: INFO: (16) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 15.770828ms) Mar 19 21:55:36.498: INFO: (16) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 15.851747ms) Mar 19 21:55:36.498: INFO: (16) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 15.835911ms) Mar 19 21:55:36.498: INFO: (16) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 16.550698ms) Mar 19 21:55:36.502: INFO: (16) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 20.02436ms) Mar 19 21:55:36.502: INFO: (16) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 20.466234ms) Mar 19 21:55:36.547: INFO: (16) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 65.457784ms) Mar 19 21:55:36.550: INFO: (17) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:160/proxy/: foo (200; 2.994794ms) Mar 19 21:55:36.551: INFO: (17) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 3.454279ms) Mar 19 21:55:36.551: INFO: (17) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 3.438356ms) Mar 19 21:55:36.551: INFO: (17) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... 
(200; 3.879318ms) Mar 19 21:55:36.551: INFO: (17) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 3.865338ms) Mar 19 21:55:36.552: INFO: (17) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 5.069274ms) Mar 19 21:55:36.552: INFO: (17) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 5.131919ms) Mar 19 21:55:36.552: INFO: (17) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 5.067989ms) Mar 19 21:55:36.552: INFO: (17) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 5.155854ms) Mar 19 21:55:36.552: INFO: (17) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 5.16376ms) Mar 19 21:55:36.553: INFO: (17) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 5.205701ms) Mar 19 21:55:36.555: INFO: (18) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 2.488595ms) Mar 19 21:55:36.555: INFO: (18) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:162/proxy/: bar (200; 2.681397ms) Mar 19 21:55:36.555: INFO: (18) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... (200; 2.671183ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 4.300391ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:1080/proxy/: test<... (200; 4.509155ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 4.582691ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 4.581555ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6:160/proxy/: foo (200; 4.575956ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 4.802191ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.7935ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 4.777133ms) Mar 19 21:55:36.557: INFO: (18) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:443/proxy/: test<... (200; 4.542843ms) Mar 19 21:55:36.563: INFO: (19) /api/v1/namespaces/proxy-6671/pods/proxy-service-bchwl-46zz6/proxy/: test (200; 4.606427ms) Mar 19 21:55:36.563: INFO: (19) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:460/proxy/: tls baz (200; 4.615551ms) Mar 19 21:55:36.563: INFO: (19) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:1080/proxy/: ... 
(200; 4.649232ms) Mar 19 21:55:36.563: INFO: (19) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-bchwl-46zz6:162/proxy/: bar (200; 4.620569ms) Mar 19 21:55:36.563: INFO: (19) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-bchwl-46zz6:462/proxy/: tls qux (200; 4.638126ms) Mar 19 21:55:36.564: INFO: (19) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname2/proxy/: bar (200; 5.933758ms) Mar 19 21:55:36.564: INFO: (19) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname1/proxy/: foo (200; 6.082284ms) Mar 19 21:55:36.564: INFO: (19) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname1/proxy/: tls baz (200; 6.043214ms) Mar 19 21:55:36.564: INFO: (19) /api/v1/namespaces/proxy-6671/services/proxy-service-bchwl:portname2/proxy/: bar (200; 6.123383ms) Mar 19 21:55:36.564: INFO: (19) /api/v1/namespaces/proxy-6671/services/http:proxy-service-bchwl:portname1/proxy/: foo (200; 6.06647ms) Mar 19 21:55:36.564: INFO: (19) /api/v1/namespaces/proxy-6671/services/https:proxy-service-bchwl:tlsportname2/proxy/: tls qux (200; 6.105547ms) STEP: deleting ReplicationController proxy-service-bchwl in namespace proxy-6671, will wait for the garbage collector to delete the pods Mar 19 21:55:36.622: INFO: Deleting ReplicationController proxy-service-bchwl took: 6.18448ms Mar 19 21:55:36.923: INFO: Terminating ReplicationController proxy-service-bchwl pods took: 300.260768ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:49.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6671" for this suite. • [SLOW TEST:27.502 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":167,"skipped":2687,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:49.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 19 21:55:49.598: INFO: Waiting up to 5m0s for pod "downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7" in namespace "downward-api-4260" to be "success or failure" Mar 19 21:55:49.610: INFO: Pod "downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.708529ms Mar 19 21:55:51.613: INFO: Pod "downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015311309s Mar 19 21:55:54.962: INFO: Pod "downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.364259859s Mar 19 21:55:56.966: INFO: Pod "downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.368640952s STEP: Saw pod success Mar 19 21:55:56.967: INFO: Pod "downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7" satisfied condition "success or failure" Mar 19 21:55:56.970: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7 container dapi-container: STEP: delete the pod Mar 19 21:55:56.994: INFO: Waiting for pod downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7 to disappear Mar 19 21:55:57.005: INFO: Pod downward-api-3bcf2dbf-f23b-4376-8426-4922fbb7bac7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:55:57.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4260" for this suite. • [SLOW TEST:7.466 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2692,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:55:57.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-4715364b-243c-4c32-a480-fca779fbe78e STEP: Creating a pod to test consume configMaps Mar 19 21:55:57.157: INFO: Waiting up to 5m0s for pod "pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec" in namespace "configmap-5174" to be "success or failure" Mar 19 21:55:57.201: INFO: Pod "pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec": Phase="Pending", Reason="", readiness=false. Elapsed: 43.751774ms Mar 19 21:55:59.204: INFO: Pod "pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047299385s Mar 19 21:56:01.211: INFO: Pod "pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053825891s STEP: Saw pod success Mar 19 21:56:01.211: INFO: Pod "pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec" satisfied condition "success or failure" Mar 19 21:56:01.213: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec container configmap-volume-test: STEP: delete the pod Mar 19 21:56:01.239: INFO: Waiting for pod pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec to disappear Mar 19 21:56:01.254: INFO: Pod pod-configmaps-171aabba-5031-43ab-9446-f99aa16dbfec no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:56:01.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5174" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2700,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:56:01.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 in namespace container-probe-3504 Mar 19 21:56:05.338: INFO: Started pod liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 in namespace container-probe-3504 STEP: checking the pod's current state and verifying that restartCount is present Mar 19 21:56:05.344: INFO: Initial restart count of pod liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 is 0 Mar 19 21:56:19.372: INFO: Restart count of pod container-probe-3504/liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 is now 1 (14.0280475s elapsed) Mar 19 21:56:39.806: INFO: Restart count of pod container-probe-3504/liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 is now 2 (34.461426306s elapsed) Mar 19 21:56:57.849: INFO: Restart count of pod container-probe-3504/liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 is now 3 (52.504777647s elapsed) Mar 19 21:57:17.892: INFO: Restart count of pod container-probe-3504/liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 is now 4 (1m12.547406995s elapsed) Mar 19 21:58:24.086: INFO: Restart count of pod container-probe-3504/liveness-8969bc64-bac5-4a8e-9636-80846d2c4805 is now 5 (2m18.741458941s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:58:24.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3504" for this suite. 
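For context on what the container-probe spec above exercises: the kubelet restarts a container whenever its liveness probe fails, and status.containerStatuses[].restartCount only ever counts upward, which is the monotonicity the test asserts. Below is a minimal client-go sketch of a pod that behaves like the one logged above; the image, command, probe timings, and the liveness-demo/default names are illustrative rather than the suite's exact values, and the context-free Create signature assumes a client-go release contemporary with the v1.17 cluster shown in this run.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for ~10s, then the probe file disappears and every
				// subsequent "cat" fails, so the kubelet restarts the container
				// over and over -- restartCount can only increase.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	// client-go matching the v1.17 server above: Create takes no context yet.
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}

Polling the pod afterwards and recording status.containerStatuses[0].restartCount should reproduce the monotonically increasing sequence (1, 2, 3, ...) logged above.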
• [SLOW TEST:142.861 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 19 21:58:24.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Mar 19 21:58:31.737: INFO: 10 pods remaining
Mar 19 21:58:31.737: INFO: 8 pods have nil DeletionTimestamp
Mar 19 21:58:31.737: INFO:
Mar 19 21:58:32.461: INFO: 0 pods remaining
Mar 19 21:58:32.461: INFO: 0 pods have nil DeletionTimestamp
Mar 19 21:58:32.461: INFO:
STEP: Gathering metrics
W0319 21:58:33.411335 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 19 21:58:33.411: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 19 21:58:33.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-859" for this suite.
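The "if the deleteOptions says so" wording above refers to the deletion propagation policy. With Foreground propagation the API server does not remove the ReplicationController outright: it sets a deletionTimestamp plus the foregroundDeletion finalizer and leaves the RC visible until the garbage collector has deleted every pod it owns, which is exactly the window in which the test sees "10 pods remaining". A minimal client-go sketch follows; the my-rc/default names are illustrative, and the signature again assumes a v1.17-era client-go.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC lingers (deletionTimestamp set,
	// foregroundDeletion finalizer present) until the garbage collector
	// has removed all of its pods; only then does the RC itself disappear.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		"my-rc", // illustrative name
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}

For contrast, Orphan propagation would delete the RC and leave the pods running and ownerless, while Background would remove the RC immediately and let the collector clean up the pods afterwards.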
• [SLOW TEST:9.296 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":171,"skipped":2734,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:58:33.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 21:58:33.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d" in namespace "downward-api-1180" to be "success or failure" Mar 19 21:58:33.970: INFO: Pod "downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.531973ms Mar 19 21:58:35.974: INFO: Pod "downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038817988s Mar 19 21:58:37.994: INFO: Pod "downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058538174s STEP: Saw pod success Mar 19 21:58:37.994: INFO: Pod "downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d" satisfied condition "success or failure" Mar 19 21:58:37.997: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d container client-container: STEP: delete the pod Mar 19 21:58:38.027: INFO: Waiting for pod downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d to disappear Mar 19 21:58:38.032: INFO: Pod downwardapi-volume-7f3425ac-dab7-48e6-8274-ddb51593cd3d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:58:38.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1180" for this suite. 
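What the downward-api-1180 spec above checks: a downwardAPI volume can project a container's own resource limits into a file via resourceFieldRef, so the container reads its CPU limit from disk rather than from an environment variable. A minimal sketch under the same assumptions as the earlier snippets (pod name, image, mount path, and the 500m limit are all illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // illustrative limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Project this container's own CPU limit into the file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}

The projected value is rounded up to whole units of the divisor (which defaults to 1 core for limits.cpu), so the illustrative 500m limit should read back from the file as 1.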
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2737,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:58:38.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:58:38.842: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:58:40.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251918, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251918, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251918, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251918, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:58:43.899: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:58:43.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:58:45.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4400" for this suite. 
STEP: Destroying namespace "webhook-4400-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.065 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":173,"skipped":2750,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:58:45.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2e791b85-eb4d-4e91-9fdf-227f5c294717 STEP: Creating a pod to test consume secrets Mar 19 21:58:45.473: INFO: Waiting up to 5m0s for pod "pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f" in namespace "secrets-3218" to be "success or failure" Mar 19 21:58:45.504: INFO: Pod "pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.542895ms Mar 19 21:58:47.509: INFO: Pod "pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036345969s Mar 19 21:58:49.520: INFO: Pod "pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046879319s STEP: Saw pod success Mar 19 21:58:49.520: INFO: Pod "pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f" satisfied condition "success or failure" Mar 19 21:58:49.523: INFO: Trying to get logs from node jerma-worker pod pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f container secret-volume-test: STEP: delete the pod Mar 19 21:58:49.551: INFO: Waiting for pod pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f to disappear Mar 19 21:58:49.563: INFO: Pod pod-secrets-96811cd8-71fe-45ff-9f9c-76468b7c013f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:58:49.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3218" for this suite. 
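The secrets-3218 spec above relies on the fact that a single Secret can back any number of volumes in the same pod, with each mount getting its own projection of the same keys. A sketch with two mounts of one secret (the secret and pod names, mount paths, and image are illustrative, under the same v1.17-era client-go assumption as before):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Two volume definitions pointing at the same secret.
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"}, // illustrative
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-two-volumes"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				// Both paths expose identical keys from the same secret.
				Command: []string{"sh", "-c", "cat /etc/secret-1/* /etc/secret-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-2", ReadOnly: true},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}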
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:58:49.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 19 21:58:49.642: INFO: Waiting up to 5m0s for pod "downward-api-be2abf65-e0ff-4087-875d-b8476c526039" in namespace "downward-api-3932" to be "success or failure" Mar 19 21:58:49.647: INFO: Pod "downward-api-be2abf65-e0ff-4087-875d-b8476c526039": Phase="Pending", Reason="", readiness=false. Elapsed: 5.066339ms Mar 19 21:58:51.668: INFO: Pod "downward-api-be2abf65-e0ff-4087-875d-b8476c526039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026119303s Mar 19 21:58:53.673: INFO: Pod "downward-api-be2abf65-e0ff-4087-875d-b8476c526039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030548692s STEP: Saw pod success Mar 19 21:58:53.673: INFO: Pod "downward-api-be2abf65-e0ff-4087-875d-b8476c526039" satisfied condition "success or failure" Mar 19 21:58:53.676: INFO: Trying to get logs from node jerma-worker2 pod downward-api-be2abf65-e0ff-4087-875d-b8476c526039 container dapi-container: STEP: delete the pod Mar 19 21:58:53.723: INFO: Waiting for pod downward-api-be2abf65-e0ff-4087-875d-b8476c526039 to disappear Mar 19 21:58:53.740: INFO: Pod downward-api-be2abf65-e0ff-4087-875d-b8476c526039 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:58:53.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3932" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:58:53.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 21:58:53.857: INFO: Creating deployment "test-recreate-deployment" Mar 19 21:58:53.860: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 19 21:58:53.871: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 19 21:58:55.879: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 19 21:58:55.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251933, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251933, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251933, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251933, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 21:58:57.885: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 19 21:58:57.892: INFO: Updating deployment test-recreate-deployment Mar 19 21:58:57.892: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 19 21:58:58.374: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3249 /apis/apps/v1/namespaces/deployment-3249/deployments/test-recreate-deployment b5c90e5d-77e9-437f-8512-0877c498463c 1126544 2 2020-03-19 21:58:53 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043bed98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-19 21:58:58 +0000 UTC,LastTransitionTime:2020-03-19 21:58:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-19 21:58:58 +0000 UTC,LastTransitionTime:2020-03-19 21:58:53 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 19 21:58:58.384: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3249 /apis/apps/v1/namespaces/deployment-3249/replicasets/test-recreate-deployment-5f94c574ff 97cf0826-ce99-46b4-960e-1444b0539fb0 1126541 1 2020-03-19 21:58:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b5c90e5d-77e9-437f-8512-0877c498463c 0xc0043bf117 0xc0043bf118}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043bf178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:58:58.384: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 19 21:58:58.384: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3249 /apis/apps/v1/namespaces/deployment-3249/replicasets/test-recreate-deployment-799c574856 deb293e3-f388-45ca-abeb-9e78e98298b2 1126533 2 2020-03-19 21:58:53 +0000 UTC map[name:sample-pod-3 
pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b5c90e5d-77e9-437f-8512-0877c498463c 0xc0043bf1e7 0xc0043bf1e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043bf258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 19 21:58:58.408: INFO: Pod "test-recreate-deployment-5f94c574ff-qkgtk" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qkgtk test-recreate-deployment-5f94c574ff- deployment-3249 /api/v1/namespaces/deployment-3249/pods/test-recreate-deployment-5f94c574ff-qkgtk 53407157-9d78-483a-9abb-a4a0df356448 1126545 0 2020-03-19 21:58:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 97cf0826-ce99-46b4-960e-1444b0539fb0 0xc004fba777 0xc004fba778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjptf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjptf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjptf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:58:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:58:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:58:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 21:58:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-19 21:58:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:58:58.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3249" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":176,"skipped":2876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:58:58.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 19 21:58:58.552: INFO: Waiting up to 5m0s for pod "pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439" in namespace "emptydir-6042" to be "success or failure" Mar 19 21:58:58.743: INFO: Pod "pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439": Phase="Pending", Reason="", readiness=false. Elapsed: 190.650195ms Mar 19 21:59:00.756: INFO: Pod "pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20396129s Mar 19 21:59:02.766: INFO: Pod "pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214084906s STEP: Saw pod success Mar 19 21:59:02.766: INFO: Pod "pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439" satisfied condition "success or failure" Mar 19 21:59:02.769: INFO: Trying to get logs from node jerma-worker pod pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439 container test-container: STEP: delete the pod Mar 19 21:59:02.802: INFO: Waiting for pod pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439 to disappear Mar 19 21:59:02.815: INFO: Pod pod-8bbc0bdf-b7b1-4fe2-adee-f97aa6423439 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:59:02.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6042" for this suite. 
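For reference, the Recreate strategy exercised in the RecreateDeployment test above is easier to read in source form than in the object dumps. A minimal sketch in Go against the k8s.io/api types this suite itself uses; the pod template is trimmed to the single httpd container visible in the dumps, and everything else mirrors the logged spec:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // recreateDeployment mirrors the spec in the dumps above: strategy Recreate
    // scales the old ReplicaSet to zero before the new one scales up, so pods
    // from two revisions never run side by side.
    func recreateDeployment() *appsv1.Deployment {
        replicas := int32(1)
        labels := map[string]string{"name": "sample-pod-3"} // the selector shown in the log
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "httpd",
                        Image: "docker.io/library/httpd:2.4.38-alpine", // the image revision 2 rolls to
                    }}},
                },
            },
        }
    }

With Type Recreate the controller tears down the revision-1 agnhost ReplicaSet before the httpd ReplicaSet scales up, which is exactly what "new pods will not run with old pods" asserts.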
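The emptydir check that follows it boils down to a one-container pod whose volume is backed by tmpfs. A minimal sketch; the pod name, image, and command here are illustrative stand-ins for the suite's own mounttest image:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsPod sketches the pod under test: an emptyDir with medium Memory is
    // backed by tmpfs, and the container prints the mount and its mode so the
    // test can assert on them before the pod reaches phase Succeeded.
    func tmpfsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // required for the "success or failure" pattern
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // assumption: the suite uses its own mounttest image
                    Command: []string{"sh", "-c", "mount | grep /mnt/volume && stat -c %a /mnt/volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
                }},
            },
        }
    }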
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2910,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:59:02.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-804eb20c-1f32-47ad-9cad-af61b32df01b in namespace container-probe-3337 Mar 19 21:59:06.914: INFO: Started pod liveness-804eb20c-1f32-47ad-9cad-af61b32df01b in namespace container-probe-3337 STEP: checking the pod's current state and verifying that restartCount is present Mar 19 21:59:06.918: INFO: Initial restart count of pod liveness-804eb20c-1f32-47ad-9cad-af61b32df01b is 0 Mar 19 21:59:27.152: INFO: Restart count of pod container-probe-3337/liveness-804eb20c-1f32-47ad-9cad-af61b32df01b is now 1 (20.234408758s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:59:27.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3337" for this suite. 
• [SLOW TEST:24.428 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2915,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:59:27.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-d97b4ca4-85a9-4240-aa1e-15733092184b STEP: Creating secret with name secret-projected-all-test-volume-6115b744-4cda-4cd3-85d3-243f2449c763 STEP: Creating a pod to test all projections for the projected volume plugin Mar 19 21:59:27.329: INFO: Waiting up to 5m0s for pod "projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe" in namespace "projected-1207" to be "success or failure" Mar 19 21:59:27.359: INFO: Pod "projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe": Phase="Pending", Reason="", readiness=false. Elapsed: 30.285362ms Mar 19 21:59:29.362: INFO: Pod "projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03319164s Mar 19 21:59:31.527: INFO: Pod "projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198672614s STEP: Saw pod success Mar 19 21:59:31.527: INFO: Pod "projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe" satisfied condition "success or failure" Mar 19 21:59:31.564: INFO: Trying to get logs from node jerma-worker pod projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe container projected-all-volume-test: STEP: delete the pod Mar 19 21:59:31.626: INFO: Waiting for pod projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe to disappear Mar 19 21:59:31.646: INFO: Pod projected-volume-f3989193-d13b-43d1-af18-5e6385ac9efe no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:59:31.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1207" for this suite.
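The "all projections" test above combines three volume sources under a single mount. A sketch of that volume in Go; the object names echo the generated ones in the log, minus their random suffixes:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // allInOneVolume sketches the single projected volume the test mounts: a
    // configMap, a secret, and downward API metadata under one mount point.
    func allInOneVolume() corev1.Volume {
        return corev1.Volume{
            Name: "all-in-one",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
    }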
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2935,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:59:31.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 21:59:32.600: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 21:59:34.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251972, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251972, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251972, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720251972, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 21:59:37.665: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 21:59:49.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8588" for this suite. 
STEP: Destroying namespace "webhook-8588-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.246 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":180,"skipped":2937,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 21:59:49.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0319 22:00:20.513016 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 19 22:00:20.513: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:20.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3102" for this suite. 
• [SLOW TEST:30.621 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":181,"skipped":2949,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:20.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1424 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1424 I0319 22:00:20.713757 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1424, replica count: 2 I0319 22:00:23.764188 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:00:26.764421 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 19 22:00:26.764: INFO: Creating new exec pod Mar 19 22:00:31.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1424 execpods4pgj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 19 22:00:34.529: INFO: stderr: "I0319 22:00:34.430887 3101 log.go:172] (0xc0005f2000) (0xc000234000) Create stream\nI0319 22:00:34.430927 3101 log.go:172] (0xc0005f2000) (0xc000234000) Stream added, broadcasting: 1\nI0319 22:00:34.433693 3101 log.go:172] (0xc0005f2000) Reply frame received for 1\nI0319 22:00:34.433734 3101 log.go:172] (0xc0005f2000) (0xc000284000) Create stream\nI0319 22:00:34.433746 3101 log.go:172] (0xc0005f2000) (0xc000284000) Stream added, broadcasting: 3\nI0319 22:00:34.434694 3101 log.go:172] (0xc0005f2000) Reply frame received for 3\nI0319 22:00:34.434718 3101 log.go:172] (0xc0005f2000) (0xc0002840a0) Create stream\nI0319 22:00:34.434726 3101 log.go:172] (0xc0005f2000) (0xc0002840a0) Stream added, broadcasting: 5\nI0319 22:00:34.435550 3101 log.go:172] (0xc0005f2000) Reply frame received for 5\nI0319 22:00:34.521574 3101 log.go:172] (0xc0005f2000) Data frame received for 5\nI0319 22:00:34.521617 3101 log.go:172] (0xc0002840a0) (5) Data frame handling\nI0319 22:00:34.521642 3101 log.go:172] (0xc0002840a0) 
(5) Data frame sent\nI0319 22:00:34.521664 3101 log.go:172] (0xc0005f2000) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nI0319 22:00:34.521676 3101 log.go:172] (0xc0002840a0) (5) Data frame handling\nI0319 22:00:34.521736 3101 log.go:172] (0xc0002840a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0319 22:00:34.522139 3101 log.go:172] (0xc0005f2000) Data frame received for 3\nI0319 22:00:34.522177 3101 log.go:172] (0xc000284000) (3) Data frame handling\nI0319 22:00:34.522364 3101 log.go:172] (0xc0005f2000) Data frame received for 5\nI0319 22:00:34.522390 3101 log.go:172] (0xc0002840a0) (5) Data frame handling\nI0319 22:00:34.524255 3101 log.go:172] (0xc0005f2000) Data frame received for 1\nI0319 22:00:34.524299 3101 log.go:172] (0xc000234000) (1) Data frame handling\nI0319 22:00:34.524328 3101 log.go:172] (0xc000234000) (1) Data frame sent\nI0319 22:00:34.524362 3101 log.go:172] (0xc0005f2000) (0xc000234000) Stream removed, broadcasting: 1\nI0319 22:00:34.524404 3101 log.go:172] (0xc0005f2000) Go away received\nI0319 22:00:34.524672 3101 log.go:172] (0xc0005f2000) (0xc000234000) Stream removed, broadcasting: 1\nI0319 22:00:34.524691 3101 log.go:172] (0xc0005f2000) (0xc000284000) Stream removed, broadcasting: 3\nI0319 22:00:34.524702 3101 log.go:172] (0xc0005f2000) (0xc0002840a0) Stream removed, broadcasting: 5\n" Mar 19 22:00:34.529: INFO: stdout: "" Mar 19 22:00:34.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1424 execpods4pgj -- /bin/sh -x -c nc -zv -t -w 2 10.104.100.140 80' Mar 19 22:00:34.741: INFO: stderr: "I0319 22:00:34.661106 3137 log.go:172] (0xc00053e2c0) (0xc000701ae0) Create stream\nI0319 22:00:34.661296 3137 log.go:172] (0xc00053e2c0) (0xc000701ae0) Stream added, broadcasting: 1\nI0319 22:00:34.664293 3137 log.go:172] (0xc00053e2c0) Reply frame received for 1\nI0319 22:00:34.664337 3137 log.go:172] (0xc00053e2c0) (0xc000701cc0) Create stream\nI0319 22:00:34.664356 3137 log.go:172] (0xc00053e2c0) (0xc000701cc0) Stream added, broadcasting: 3\nI0319 22:00:34.665464 3137 log.go:172] (0xc00053e2c0) Reply frame received for 3\nI0319 22:00:34.665493 3137 log.go:172] (0xc00053e2c0) (0xc000982000) Create stream\nI0319 22:00:34.665501 3137 log.go:172] (0xc00053e2c0) (0xc000982000) Stream added, broadcasting: 5\nI0319 22:00:34.666371 3137 log.go:172] (0xc00053e2c0) Reply frame received for 5\nI0319 22:00:34.735659 3137 log.go:172] (0xc00053e2c0) Data frame received for 5\nI0319 22:00:34.735691 3137 log.go:172] (0xc000982000) (5) Data frame handling\nI0319 22:00:34.735708 3137 log.go:172] (0xc000982000) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.100.140 80\nConnection to 10.104.100.140 80 port [tcp/http] succeeded!\nI0319 22:00:34.735753 3137 log.go:172] (0xc00053e2c0) Data frame received for 3\nI0319 22:00:34.735807 3137 log.go:172] (0xc000701cc0) (3) Data frame handling\nI0319 22:00:34.735835 3137 log.go:172] (0xc00053e2c0) Data frame received for 5\nI0319 22:00:34.735851 3137 log.go:172] (0xc000982000) (5) Data frame handling\nI0319 22:00:34.737853 3137 log.go:172] (0xc00053e2c0) Data frame received for 1\nI0319 22:00:34.737885 3137 log.go:172] (0xc000701ae0) (1) Data frame handling\nI0319 22:00:34.737905 3137 log.go:172] (0xc000701ae0) (1) Data frame sent\nI0319 22:00:34.737944 3137 log.go:172] (0xc00053e2c0) (0xc000701ae0) Stream removed, broadcasting: 1\nI0319 22:00:34.737967 3137 log.go:172] (0xc00053e2c0) Go away received\nI0319 22:00:34.738287 3137 
log.go:172] (0xc00053e2c0) (0xc000701ae0) Stream removed, broadcasting: 1\nI0319 22:00:34.738309 3137 log.go:172] (0xc00053e2c0) (0xc000701cc0) Stream removed, broadcasting: 3\nI0319 22:00:34.738324 3137 log.go:172] (0xc00053e2c0) (0xc000982000) Stream removed, broadcasting: 5\n" Mar 19 22:00:34.742: INFO: stdout: "" Mar 19 22:00:34.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1424 execpods4pgj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32058' Mar 19 22:00:34.944: INFO: stderr: "I0319 22:00:34.867004 3159 log.go:172] (0xc0009e0dc0) (0xc000958320) Create stream\nI0319 22:00:34.867057 3159 log.go:172] (0xc0009e0dc0) (0xc000958320) Stream added, broadcasting: 1\nI0319 22:00:34.869760 3159 log.go:172] (0xc0009e0dc0) Reply frame received for 1\nI0319 22:00:34.869803 3159 log.go:172] (0xc0009e0dc0) (0xc0009b2000) Create stream\nI0319 22:00:34.869813 3159 log.go:172] (0xc0009e0dc0) (0xc0009b2000) Stream added, broadcasting: 3\nI0319 22:00:34.870666 3159 log.go:172] (0xc0009e0dc0) Reply frame received for 3\nI0319 22:00:34.870693 3159 log.go:172] (0xc0009e0dc0) (0xc0009583c0) Create stream\nI0319 22:00:34.870702 3159 log.go:172] (0xc0009e0dc0) (0xc0009583c0) Stream added, broadcasting: 5\nI0319 22:00:34.871654 3159 log.go:172] (0xc0009e0dc0) Reply frame received for 5\nI0319 22:00:34.936561 3159 log.go:172] (0xc0009e0dc0) Data frame received for 5\nI0319 22:00:34.936607 3159 log.go:172] (0xc0009583c0) (5) Data frame handling\nI0319 22:00:34.936644 3159 log.go:172] (0xc0009583c0) (5) Data frame sent\nI0319 22:00:34.936664 3159 log.go:172] (0xc0009e0dc0) Data frame received for 5\nI0319 22:00:34.936681 3159 log.go:172] (0xc0009583c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32058\nConnection to 172.17.0.10 32058 port [tcp/32058] succeeded!\nI0319 22:00:34.936840 3159 log.go:172] (0xc0009583c0) (5) Data frame sent\nI0319 22:00:34.936873 3159 log.go:172] (0xc0009e0dc0) Data frame received for 5\nI0319 22:00:34.936885 3159 log.go:172] (0xc0009583c0) (5) Data frame handling\nI0319 22:00:34.937055 3159 log.go:172] (0xc0009e0dc0) Data frame received for 3\nI0319 22:00:34.937097 3159 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0319 22:00:34.938755 3159 log.go:172] (0xc0009e0dc0) Data frame received for 1\nI0319 22:00:34.938779 3159 log.go:172] (0xc000958320) (1) Data frame handling\nI0319 22:00:34.938811 3159 log.go:172] (0xc000958320) (1) Data frame sent\nI0319 22:00:34.938905 3159 log.go:172] (0xc0009e0dc0) (0xc000958320) Stream removed, broadcasting: 1\nI0319 22:00:34.939004 3159 log.go:172] (0xc0009e0dc0) Go away received\nI0319 22:00:34.939371 3159 log.go:172] (0xc0009e0dc0) (0xc000958320) Stream removed, broadcasting: 1\nI0319 22:00:34.939398 3159 log.go:172] (0xc0009e0dc0) (0xc0009b2000) Stream removed, broadcasting: 3\nI0319 22:00:34.939410 3159 log.go:172] (0xc0009e0dc0) (0xc0009583c0) Stream removed, broadcasting: 5\n" Mar 19 22:00:34.944: INFO: stdout: "" Mar 19 22:00:34.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1424 execpods4pgj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32058' Mar 19 22:00:35.161: INFO: stderr: "I0319 22:00:35.071939 3179 log.go:172] (0xc000a169a0) (0xc0006d3d60) Create stream\nI0319 22:00:35.071982 3179 log.go:172] (0xc000a169a0) (0xc0006d3d60) Stream added, broadcasting: 1\nI0319 22:00:35.074752 3179 log.go:172] (0xc000a169a0) Reply frame received for 1\nI0319 22:00:35.074833 3179 log.go:172] (0xc000a169a0) (0xc000a90000) Create 
stream\nI0319 22:00:35.074868 3179 log.go:172] (0xc000a169a0) (0xc000a90000) Stream added, broadcasting: 3\nI0319 22:00:35.075722 3179 log.go:172] (0xc000a169a0) Reply frame received for 3\nI0319 22:00:35.075744 3179 log.go:172] (0xc000a169a0) (0xc0003154a0) Create stream\nI0319 22:00:35.075751 3179 log.go:172] (0xc000a169a0) (0xc0003154a0) Stream added, broadcasting: 5\nI0319 22:00:35.076553 3179 log.go:172] (0xc000a169a0) Reply frame received for 5\nI0319 22:00:35.153367 3179 log.go:172] (0xc000a169a0) Data frame received for 5\nI0319 22:00:35.153402 3179 log.go:172] (0xc0003154a0) (5) Data frame handling\nI0319 22:00:35.153419 3179 log.go:172] (0xc0003154a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32058\nI0319 22:00:35.155078 3179 log.go:172] (0xc000a169a0) Data frame received for 5\nI0319 22:00:35.155111 3179 log.go:172] (0xc0003154a0) (5) Data frame handling\nI0319 22:00:35.155139 3179 log.go:172] (0xc0003154a0) (5) Data frame sent\nConnection to 172.17.0.8 32058 port [tcp/32058] succeeded!\nI0319 22:00:35.155412 3179 log.go:172] (0xc000a169a0) Data frame received for 3\nI0319 22:00:35.155440 3179 log.go:172] (0xc000a90000) (3) Data frame handling\nI0319 22:00:35.155627 3179 log.go:172] (0xc000a169a0) Data frame received for 5\nI0319 22:00:35.155649 3179 log.go:172] (0xc0003154a0) (5) Data frame handling\nI0319 22:00:35.157581 3179 log.go:172] (0xc000a169a0) Data frame received for 1\nI0319 22:00:35.157595 3179 log.go:172] (0xc0006d3d60) (1) Data frame handling\nI0319 22:00:35.157603 3179 log.go:172] (0xc0006d3d60) (1) Data frame sent\nI0319 22:00:35.157612 3179 log.go:172] (0xc000a169a0) (0xc0006d3d60) Stream removed, broadcasting: 1\nI0319 22:00:35.157688 3179 log.go:172] (0xc000a169a0) Go away received\nI0319 22:00:35.157951 3179 log.go:172] (0xc000a169a0) (0xc0006d3d60) Stream removed, broadcasting: 1\nI0319 22:00:35.157971 3179 log.go:172] (0xc000a169a0) (0xc000a90000) Stream removed, broadcasting: 3\nI0319 22:00:35.157981 3179 log.go:172] (0xc000a169a0) (0xc0003154a0) Stream removed, broadcasting: 5\n" Mar 19 22:00:35.161: INFO: stdout: "" Mar 19 22:00:35.161: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:35.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1424" for this suite. 
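The type flip driven above amounts to a get-mutate-update on the Service. A sketch, again with v1.17-era client-go signatures; the selector and target port are assumptions, while the service name, namespace, and port 80 come from the log:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // toNodePort flips the Service type the way the test does: clear the
    // ExternalName field (only valid on that type), give the service a
    // selector and port so kube-proxy can program a node port, then update.
    func toNodePort(client kubernetes.Interface) error {
        svc, err := client.CoreV1().Services("services-1424").Get("externalname-service", metav1.GetOptions{})
        if err != nil {
            return err
        }
        svc.Spec.Type = corev1.ServiceTypeNodePort
        svc.Spec.ExternalName = ""
        svc.Spec.Selector = map[string]string{"name": "externalname-service"}               // assumption
        svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}} // target port assumed
        _, err = client.CoreV1().Services("services-1424").Update(svc)
        return err
    }

Once the update lands, kube-proxy allocates and programs a node port (32058 in this run), which is why the final two nc probes target the node IPs directly.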
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.803 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":182,"skipped":2956,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:35.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 19 22:00:39.930: INFO: Successfully updated pod "annotationupdatee8e69b2b-bd01-41c8-87fc-a958026bcd2b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:41.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8760" for this suite. 
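The annotation propagation verified above relies on a downward API volume: the kubelet writes metadata.annotations into a file and rewrites it after the pod is updated. A sketch of the volume; the volume and file names are illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // annotationsVolume sketches the downward API volume behind the test: the
    // kubelet writes metadata.annotations to a file and rewrites that file
    // after the pod's annotations change, which is what the test waits on.
    func annotationsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "annotations",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                    }},
                },
            },
        }
    }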
• [SLOW TEST:6.657 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2958,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:41.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:00:42.074: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7ca35624-2b6f-4c7f-a3fc-eeb379a7b75d" in namespace "security-context-test-970" to be "success or failure" Mar 19 22:00:42.091: INFO: Pod "alpine-nnp-false-7ca35624-2b6f-4c7f-a3fc-eeb379a7b75d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.926702ms Mar 19 22:00:44.109: INFO: Pod "alpine-nnp-false-7ca35624-2b6f-4c7f-a3fc-eeb379a7b75d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03425528s Mar 19 22:00:46.113: INFO: Pod "alpine-nnp-false-7ca35624-2b6f-4c7f-a3fc-eeb379a7b75d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038361349s Mar 19 22:00:46.113: INFO: Pod "alpine-nnp-false-7ca35624-2b6f-4c7f-a3fc-eeb379a7b75d" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:46.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-970" for this suite. 
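The security context under test above is a single boolean. A sketch; the container name comes from the log, while the image and the uid-checking command the suite runs are assumptions and only hinted at here:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nnpContainer carries the one field under test: AllowPrivilegeEscalation
    // false sets the Linux no_new_privs bit, so setuid binaries in the image
    // cannot raise the container's effective privileges.
    func nnpContainer() corev1.Container {
        allowEscalation := false
        return corev1.Container{
            Name:  "alpine-nnp-false",
            Image: "alpine", // assumption: the suite uses its own nonroot test image
            SecurityContext: &corev1.SecurityContext{
                AllowPrivilegeEscalation: &allowEscalation,
            },
        }
    }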
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2971,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:46.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:46.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3353" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":185,"skipped":2983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:46.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-327c75d5-bf58-4538-ae0c-c7dd1be710d3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-327c75d5-bf58-4538-ae0c-c7dd1be710d3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:52.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5430" for this suite. 
• [SLOW TEST:6.129 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3031,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:52.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 19 22:00:57.012: INFO: Successfully updated pod "labelsupdate7f45b490-3d62-4033-82e3-966a3d2a6407" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:00:59.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1523" for this suite. 
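Its downwardAPI sibling above works the same way from the other side: pod labels are mutable in place, and the kubelet rewrites the projected "labels" file after the update. A sketch with an illustrative label key, v1.17-era client-go signatures:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // relabelPod performs the mutation the test waits on: after the pod's
    // labels change, a downward API volume projecting metadata.labels gets
    // its "labels" file rewritten by the kubelet.
    func relabelPod(client kubernetes.Interface, namespace, podName string) error {
        pod, err := client.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["key"] = "value-2" // illustrative key and value
        _, err = client.CoreV1().Pods(namespace).Update(pod)
        return err
    }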
• [SLOW TEST:6.793 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3039,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:00:59.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:00:59.264: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 19 22:01:04.267: INFO: Pod name rollover-pod: Found 1 pod out of 1 STEP: ensuring each pod is running Mar 19 22:01:04.267: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 19 22:01:06.271: INFO: Creating deployment "test-rollover-deployment" Mar 19 22:01:06.286: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 19 22:01:08.301: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 19 22:01:08.307: INFO: Ensure that both replica sets have 1 created replica Mar 19 22:01:08.312: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 19 22:01:08.319: INFO: Updating deployment test-rollover-deployment Mar 19 22:01:08.319: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller Mar 19 22:01:10.334: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 19 22:01:10.340: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 19 22:01:10.346: INFO: all replica sets need to contain the pod-template-hash label Mar 19 22:01:10.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252068, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}},
CollisionCount:(*int32)(nil)} Mar 19 22:01:12.354: INFO: all replica sets need to contain the pod-template-hash label Mar 19 22:01:12.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252070, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 22:01:14.353: INFO: all replica sets need to contain the pod-template-hash label Mar 19 22:01:14.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252070, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 22:01:16.353: INFO: all replica sets need to contain the pod-template-hash label Mar 19 22:01:16.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252070, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 22:01:18.352: INFO: all replica sets need to contain the pod-template-hash label Mar 19 22:01:18.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252070, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 22:01:20.353: INFO: all replica sets need to contain the pod-template-hash label Mar 19 22:01:20.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252070, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252066, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 22:01:22.354: INFO: Mar 19 22:01:22.354: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 19 22:01:22.363: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8072 /apis/apps/v1/namespaces/deployment-8072/deployments/test-rollover-deployment b6abce2f-7d2a-456c-b174-70697a27684b 1127482 2 2020-03-19 22:01:06 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005154148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-19 22:01:06 +0000 UTC,LastTransitionTime:2020-03-19 22:01:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-19 22:01:21 +0000 UTC,LastTransitionTime:2020-03-19 22:01:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 19 22:01:22.367: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-8072 /apis/apps/v1/namespaces/deployment-8072/replicasets/test-rollover-deployment-574d6dfbff 8eb5fa1f-47f4-4b89-ab50-814dc99cd165 1127471 2 2020-03-19 22:01:08 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b6abce2f-7d2a-456c-b174-70697a27684b 0xc00519b4a7 0xc00519b4a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00519b518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 19 22:01:22.367: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 19 22:01:22.367: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8072 /apis/apps/v1/namespaces/deployment-8072/replicasets/test-rollover-controller 11a0973c-0e84-4e8b-9f24-f18274d10400 1127481 2 2020-03-19 22:00:59 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b6abce2f-7d2a-456c-b174-70697a27684b 0xc00519b3c7 0xc00519b3c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00519b428 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 19 22:01:22.367: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-8072 /apis/apps/v1/namespaces/deployment-8072/replicasets/test-rollover-deployment-f6c94f66c 55508f86-2958-4052-b5ab-3bbe37b32af2 1127414 2 2020-03-19 22:01:06 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b6abce2f-7d2a-456c-b174-70697a27684b 0xc00519b580 0xc00519b581}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00519b5f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 19 22:01:22.371: INFO: Pod "test-rollover-deployment-574d6dfbff-65lrm" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-65lrm test-rollover-deployment-574d6dfbff- deployment-8072 /api/v1/namespaces/deployment-8072/pods/test-rollover-deployment-574d6dfbff-65lrm f0d17c56-154f-474d-a7ef-f9d453af87f6 1127439 0 2020-03-19 22:01:08 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 8eb5fa1f-47f4-4b89-ab50-814dc99cd165 0xc00519bb17 0xc00519bb18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8r5r9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8r5r9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8r5r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 22:01:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 22:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 22:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-19 22:01:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.75,StartTime:2020-03-19 22:01:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-19 22:01:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://d3556a900e983af3c93db67a57cbd65efbcf4a7bb5a6da99474462477d7bfc79,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:01:22.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8072" for this suite. • [SLOW TEST:23.226 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":188,"skipped":3047,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:01:22.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5717 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5717 STEP: creating replication controller externalsvc in namespace services-5717 I0319 22:01:22.591891 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5717, replica count: 2 I0319 22:01:25.642310 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:01:28.642544 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 19 22:01:28.735: INFO: Creating new exec pod Mar 19 22:01:32.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5717 execpodczrpf -- /bin/sh -x -c nslookup nodeport-service' Mar 19 22:01:33.006: INFO: stderr: "I0319 
22:01:32.898606 3203 log.go:172] (0xc000a7c790) (0xc000a3e460) Create stream\nI0319 22:01:32.898668 3203 log.go:172] (0xc000a7c790) (0xc000a3e460) Stream added, broadcasting: 1\nI0319 22:01:32.903040 3203 log.go:172] (0xc000a7c790) Reply frame received for 1\nI0319 22:01:32.903092 3203 log.go:172] (0xc000a7c790) (0xc000259400) Create stream\nI0319 22:01:32.903110 3203 log.go:172] (0xc000a7c790) (0xc000259400) Stream added, broadcasting: 3\nI0319 22:01:32.904268 3203 log.go:172] (0xc000a7c790) Reply frame received for 3\nI0319 22:01:32.904319 3203 log.go:172] (0xc000a7c790) (0xc000900000) Create stream\nI0319 22:01:32.904338 3203 log.go:172] (0xc000a7c790) (0xc000900000) Stream added, broadcasting: 5\nI0319 22:01:32.905430 3203 log.go:172] (0xc000a7c790) Reply frame received for 5\nI0319 22:01:32.987553 3203 log.go:172] (0xc000a7c790) Data frame received for 5\nI0319 22:01:32.987597 3203 log.go:172] (0xc000900000) (5) Data frame handling\nI0319 22:01:32.987634 3203 log.go:172] (0xc000900000) (5) Data frame sent\n+ nslookup nodeport-service\nI0319 22:01:32.999042 3203 log.go:172] (0xc000a7c790) Data frame received for 3\nI0319 22:01:32.999073 3203 log.go:172] (0xc000259400) (3) Data frame handling\nI0319 22:01:32.999102 3203 log.go:172] (0xc000259400) (3) Data frame sent\nI0319 22:01:33.000068 3203 log.go:172] (0xc000a7c790) Data frame received for 3\nI0319 22:01:33.000097 3203 log.go:172] (0xc000259400) (3) Data frame handling\nI0319 22:01:33.000141 3203 log.go:172] (0xc000259400) (3) Data frame sent\nI0319 22:01:33.000573 3203 log.go:172] (0xc000a7c790) Data frame received for 3\nI0319 22:01:33.000609 3203 log.go:172] (0xc000259400) (3) Data frame handling\nI0319 22:01:33.000700 3203 log.go:172] (0xc000a7c790) Data frame received for 5\nI0319 22:01:33.000722 3203 log.go:172] (0xc000900000) (5) Data frame handling\nI0319 22:01:33.003198 3203 log.go:172] (0xc000a7c790) Data frame received for 1\nI0319 22:01:33.003225 3203 log.go:172] (0xc000a3e460) (1) Data frame handling\nI0319 22:01:33.003235 3203 log.go:172] (0xc000a3e460) (1) Data frame sent\nI0319 22:01:33.003246 3203 log.go:172] (0xc000a7c790) (0xc000a3e460) Stream removed, broadcasting: 1\nI0319 22:01:33.003265 3203 log.go:172] (0xc000a7c790) Go away received\nI0319 22:01:33.003673 3203 log.go:172] (0xc000a7c790) (0xc000a3e460) Stream removed, broadcasting: 1\nI0319 22:01:33.003690 3203 log.go:172] (0xc000a7c790) (0xc000259400) Stream removed, broadcasting: 3\nI0319 22:01:33.003701 3203 log.go:172] (0xc000a7c790) (0xc000900000) Stream removed, broadcasting: 5\n" Mar 19 22:01:33.007: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5717.svc.cluster.local\tcanonical name = externalsvc.services-5717.svc.cluster.local.\nName:\texternalsvc.services-5717.svc.cluster.local\nAddress: 10.111.32.40\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5717, will wait for the garbage collector to delete the pods Mar 19 22:01:33.075: INFO: Deleting ReplicationController externalsvc took: 5.610657ms Mar 19 22:01:33.375: INFO: Terminating ReplicationController externalsvc pods took: 300.268409ms Mar 19 22:01:49.319: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:01:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5717" for this suite. 
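Annotation: the sequence above creates a NodePort service backed by the externalsvc replication controller, flips the service's type, and the nslookup stdout confirms DNS now answers with a CNAME to externalsvc's FQDN. In API terms the type change amounts to the update sketched below. This is an illustrative reconstruction, not the suite's own code, using client-go v0.17 signatures (no context argument; newer releases add ctx and an options struct); names are taken from the log. Note that converting to ExternalName also requires clearing the clusterIP and ports, otherwise the API server rejects the update.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName rewrites an existing NodePort/ClusterIP service into an
// ExternalName service, as the step "changing the NodePort service to
// type=ExternalName" does above.
func toExternalName(cs kubernetes.Interface, ns, name, fqdn string) error {
	svc, err := cs.CoreV1().Services(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = fqdn // e.g. externalsvc.services-5717.svc.cluster.local
	// ExternalName services carry no clusterIP and no node ports.
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	_, err = cs.CoreV1().Services(ns).Update(svc)
	return err
}
```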
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.000 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":189,"skipped":3052,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:01:49.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c45c69ec-0c94-4d76-a316-10890f8568ca STEP: Creating a pod to test consume secrets Mar 19 22:01:49.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5" in namespace "projected-6171" to be "success or failure" Mar 19 22:01:49.484: INFO: Pod "pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.671618ms Mar 19 22:01:51.488: INFO: Pod "pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007179566s Mar 19 22:01:53.492: INFO: Pod "pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010691098s STEP: Saw pod success Mar 19 22:01:53.492: INFO: Pod "pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5" satisfied condition "success or failure" Mar 19 22:01:53.494: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5 container projected-secret-volume-test: STEP: delete the pod Mar 19 22:01:53.509: INFO: Waiting for pod pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5 to disappear Mar 19 22:01:53.514: INFO: Pod pod-projected-secrets-f75e39e4-5997-4823-9ad9-3626328b15f5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:01:53.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6171" for this suite. 
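Annotation: this test mounts a secret through a *projected* volume rather than a plain secret volume, and the "success or failure" check reads the container's output to verify the file content. A minimal sketch of the pod wiring, assuming a secret key data-1 and a mount path that the log does not print:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod consumes a secret through a projected volume source.
// Key name, command, and mount path are illustrative.
func projectedSecretPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
```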
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:01:53.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:01:53.692: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a917a971-3f4b-4928-951d-ea01f0fccb63", Controller:(*bool)(0xc0052f313a), BlockOwnerDeletion:(*bool)(0xc0052f313b)}} Mar 19 22:01:53.707: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c432a8fd-d1ce-4c9a-bc9b-d41ed40b4e8d", Controller:(*bool)(0xc0052f32f2), BlockOwnerDeletion:(*bool)(0xc0052f32f3)}} Mar 19 22:01:53.750: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"156c3b16-7abb-4f71-b908-cbb97c476e5f", Controller:(*bool)(0xc0052f349a), BlockOwnerDeletion:(*bool)(0xc0052f349b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:01:58.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5428" for this suite. 
• [SLOW TEST:5.285 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":191,"skipped":3082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:01:58.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b66b7834-691f-4946-bcd1-d086eef0694a STEP: Creating a pod to test consume configMaps Mar 19 22:01:58.946: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8" in namespace "projected-4336" to be "success or failure" Mar 19 22:01:58.950: INFO: Pod "pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089425ms Mar 19 22:02:00.966: INFO: Pod "pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020025627s Mar 19 22:02:02.971: INFO: Pod "pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024383692s STEP: Saw pod success Mar 19 22:02:02.971: INFO: Pod "pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8" satisfied condition "success or failure" Mar 19 22:02:02.974: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8 container projected-configmap-volume-test: STEP: delete the pod Mar 19 22:02:03.007: INFO: Waiting for pod pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8 to disappear Mar 19 22:02:03.056: INFO: Pod pod-projected-configmaps-de5a3339-87fd-44d7-baf2-3a0da587e9d8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:03.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4336" for this suite. 
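Annotation: this variant additionally pins defaultMode on the projected configMap volume, and the test container reports the resulting file permissions back ([LinuxOnly] because Windows has no POSIX modes). A sketch of the volume; the mode value is illustrative, since the log does not print the one actually used:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume exposes a configMap through a projected source
// with an explicit defaultMode controlling the mounted files' permissions.
func projectedConfigMapVolume(cmName string) corev1.Volume {
	mode := int32(0400) // illustrative; the test stats the file to confirm
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}
```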
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:03.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 19 22:02:03.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6683' Mar 19 22:02:03.414: INFO: stderr: "" Mar 19 22:02:03.414: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 19 22:02:03.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6683' Mar 19 22:02:03.535: INFO: stderr: "" Mar 19 22:02:03.535: INFO: stdout: "update-demo-nautilus-cqq75 update-demo-nautilus-nkg97 " Mar 19 22:02:03.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqq75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:03.619: INFO: stderr: "" Mar 19 22:02:03.619: INFO: stdout: "" Mar 19 22:02:03.619: INFO: update-demo-nautilus-cqq75 is created but not running Mar 19 22:02:08.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6683' Mar 19 22:02:08.727: INFO: stderr: "" Mar 19 22:02:08.727: INFO: stdout: "update-demo-nautilus-cqq75 update-demo-nautilus-nkg97 " Mar 19 22:02:08.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqq75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:08.823: INFO: stderr: "" Mar 19 22:02:08.823: INFO: stdout: "true" Mar 19 22:02:08.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqq75 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:08.921: INFO: stderr: "" Mar 19 22:02:08.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 22:02:08.921: INFO: validating pod update-demo-nautilus-cqq75 Mar 19 22:02:08.925: INFO: got data: { "image": "nautilus.jpg" } Mar 19 22:02:08.925: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 22:02:08.925: INFO: update-demo-nautilus-cqq75 is verified up and running Mar 19 22:02:08.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkg97 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:09.032: INFO: stderr: "" Mar 19 22:02:09.033: INFO: stdout: "true" Mar 19 22:02:09.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkg97 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:09.123: INFO: stderr: "" Mar 19 22:02:09.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 19 22:02:09.123: INFO: validating pod update-demo-nautilus-nkg97 Mar 19 22:02:09.127: INFO: got data: { "image": "nautilus.jpg" } Mar 19 22:02:09.128: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 19 22:02:09.128: INFO: update-demo-nautilus-nkg97 is verified up and running STEP: rolling-update to new replication controller Mar 19 22:02:09.130: INFO: scanned /root for discovery docs: Mar 19 22:02:09.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6683' Mar 19 22:02:31.712: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 19 22:02:31.712: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 19 22:02:31.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6683' Mar 19 22:02:31.811: INFO: stderr: "" Mar 19 22:02:31.811: INFO: stdout: "update-demo-kitten-lc9sx update-demo-kitten-lj76k " Mar 19 22:02:31.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lc9sx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:31.907: INFO: stderr: "" Mar 19 22:02:31.907: INFO: stdout: "true" Mar 19 22:02:31.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lc9sx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:31.995: INFO: stderr: "" Mar 19 22:02:31.995: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 19 22:02:31.995: INFO: validating pod update-demo-kitten-lc9sx Mar 19 22:02:31.999: INFO: got data: { "image": "kitten.jpg" } Mar 19 22:02:31.999: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 19 22:02:31.999: INFO: update-demo-kitten-lc9sx is verified up and running Mar 19 22:02:31.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lj76k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:32.095: INFO: stderr: "" Mar 19 22:02:32.095: INFO: stdout: "true" Mar 19 22:02:32.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lj76k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6683' Mar 19 22:02:32.193: INFO: stderr: "" Mar 19 22:02:32.193: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 19 22:02:32.193: INFO: validating pod update-demo-kitten-lj76k Mar 19 22:02:32.198: INFO: got data: { "image": "kitten.jpg" } Mar 19 22:02:32.198: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 19 22:02:32.198: INFO: update-demo-kitten-lj76k is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:32.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6683" for this suite. 
• [SLOW TEST:29.142 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":193,"skipped":3137,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:32.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-f07cd835-5f11-4717-9b86-2976402094cd STEP: Creating a pod to test consume secrets Mar 19 22:02:32.291: INFO: Waiting up to 5m0s for pod "pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba" in namespace "secrets-3694" to be "success or failure" Mar 19 22:02:32.298: INFO: Pod "pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544062ms Mar 19 22:02:34.302: INFO: Pod "pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010670784s Mar 19 22:02:36.307: INFO: Pod "pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01531895s STEP: Saw pod success Mar 19 22:02:36.307: INFO: Pod "pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba" satisfied condition "success or failure" Mar 19 22:02:36.310: INFO: Trying to get logs from node jerma-worker pod pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba container secret-volume-test: STEP: delete the pod Mar 19 22:02:36.326: INFO: Waiting for pod pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba to disappear Mar 19 22:02:36.330: INFO: Pod pod-secrets-19f94357-e188-485d-b611-4b84f0c45bba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:36.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3694" for this suite. 
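Annotation: the "with mappings" variant mounts the secret with items that remap a key to a custom path, instead of exposing every key under its own name. Sketch, with illustrative key and path names (the log does not print them):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithMappings remaps one secret key to a chosen relative path;
// the file then appears at <mountPath>/new-path-data-1.
func secretVolumeWithMappings(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
				}},
			},
		},
	}
}
```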
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3153,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:36.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 19 22:02:36.393: INFO: Waiting up to 5m0s for pod "pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa" in namespace "emptydir-6021" to be "success or failure" Mar 19 22:02:36.409: INFO: Pod "pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa": Phase="Pending", Reason="", readiness=false. Elapsed: 16.439266ms Mar 19 22:02:38.414: INFO: Pod "pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021280477s Mar 19 22:02:40.418: INFO: Pod "pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025365061s STEP: Saw pod success Mar 19 22:02:40.418: INFO: Pod "pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa" satisfied condition "success or failure" Mar 19 22:02:40.421: INFO: Trying to get logs from node jerma-worker2 pod pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa container test-container: STEP: delete the pod Mar 19 22:02:40.446: INFO: Waiting for pod pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa to disappear Mar 19 22:02:40.450: INFO: Pod pod-2ea09e09-31b6-4e4e-91b8-a3cc2fbaffaa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:40.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6021" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3162,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:40.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 19 22:02:40.631: INFO: Waiting up to 5m0s for pod "pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e" in namespace "emptydir-6829" to be "success or failure" Mar 19 22:02:40.646: INFO: Pod "pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.253985ms Mar 19 22:02:42.649: INFO: Pod "pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018303229s Mar 19 22:02:44.652: INFO: Pod "pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021588416s STEP: Saw pod success Mar 19 22:02:44.652: INFO: Pod "pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e" satisfied condition "success or failure" Mar 19 22:02:44.655: INFO: Trying to get logs from node jerma-worker2 pod pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e container test-container: STEP: delete the pod Mar 19 22:02:44.671: INFO: Waiting for pod pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e to disappear Mar 19 22:02:44.676: INFO: Pod pod-1bc53ab7-8efc-4ae8-b1cd-91a0da36e97e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:44.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6829" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:44.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-93ee00dc-7af7-4117-be90-2204b58a58bd STEP: Creating a pod to test consume configMaps Mar 19 22:02:44.738: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84" in namespace "projected-5175" to be "success or failure" Mar 19 22:02:44.749: INFO: Pod "pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84": Phase="Pending", Reason="", readiness=false. Elapsed: 11.279154ms Mar 19 22:02:46.755: INFO: Pod "pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016956583s Mar 19 22:02:48.758: INFO: Pod "pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020622523s STEP: Saw pod success Mar 19 22:02:48.758: INFO: Pod "pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84" satisfied condition "success or failure" Mar 19 22:02:48.760: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84 container projected-configmap-volume-test: STEP: delete the pod Mar 19 22:02:48.952: INFO: Waiting for pod pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84 to disappear Mar 19 22:02:48.963: INFO: Pod pod-projected-configmaps-680106c0-e706-4704-a5af-7c7bee60df84 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:48.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5175" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:48.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 19 22:02:49.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1788' Mar 19 22:02:49.457: INFO: stderr: "" Mar 19 22:02:49.457: INFO: stdout: "pod/pause created\n" Mar 19 22:02:49.457: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 19 22:02:49.458: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1788" to be "running and ready" Mar 19 22:02:49.471: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.000918ms Mar 19 22:02:51.475: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017105135s Mar 19 22:02:53.479: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.021174377s Mar 19 22:02:53.479: INFO: Pod "pause" satisfied condition "running and ready" Mar 19 22:02:53.479: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 19 22:02:53.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1788' Mar 19 22:02:53.582: INFO: stderr: "" Mar 19 22:02:53.582: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 19 22:02:53.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1788' Mar 19 22:02:53.675: INFO: stderr: "" Mar 19 22:02:53.675: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 19 22:02:53.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1788' Mar 19 22:02:53.806: INFO: stderr: "" Mar 19 22:02:53.806: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 19 22:02:53.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1788' Mar 19 22:02:53.895: INFO: stderr: "" Mar 19 22:02:53.895: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 19 22:02:53.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1788' Mar 19 22:02:53.992: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 19 22:02:53.992: INFO: stdout: "pod \"pause\" force deleted\n" Mar 19 22:02:53.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1788' Mar 19 22:02:54.088: INFO: stderr: "No resources found in kubectl-1788 namespace.\n" Mar 19 22:02:54.088: INFO: stdout: "" Mar 19 22:02:54.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1788 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 19 22:02:54.271: INFO: stderr: "" Mar 19 22:02:54.271: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:54.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1788" for this suite. 
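Annotation: `kubectl label pods pause testing-label=testing-label-value` and the later `testing-label-` removal are plain metadata edits. An API-level equivalent is sketched below (client-go v0.17 signatures; kubectl itself issues a PATCH rather than this get-then-update, so treat it as an illustration):

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setPodLabel adds key=value to the pod's labels, or removes key when
// value is empty, mirroring `kubectl label ... key=value` and `key-`.
func setPodLabel(cs kubernetes.Interface, ns, podName, key, value string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	if value == "" {
		delete(pod.Labels, key)
	} else {
		pod.Labels[key] = value
	}
	_, err = cs.CoreV1().Pods(ns).Update(pod)
	return err
}
```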
• [SLOW TEST:5.302 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":198,"skipped":3286,"failed":0} [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:54.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:02:54.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1" in namespace "downward-api-7307" to be "success or failure" Mar 19 22:02:54.536: INFO: Pod "downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 62.9263ms Mar 19 22:02:56.540: INFO: Pod "downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067422354s Mar 19 22:02:58.544: INFO: Pod "downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071444576s STEP: Saw pod success Mar 19 22:02:58.545: INFO: Pod "downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1" satisfied condition "success or failure" Mar 19 22:02:58.548: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1 container client-container: STEP: delete the pod Mar 19 22:02:58.566: INFO: Waiting for pod downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1 to disappear Mar 19 22:02:58.570: INFO: Pod downwardapi-volume-9c438d04-dab1-41fa-b475-e94cd8e7e2a1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:02:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7307" for this suite. 
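Annotation: exposing a container's CPU request as a file goes through a downwardAPI volume with a resourceFieldRef; the test then reads the file back from the client-container's logs and compares it against the request. Sketch, with an assumed container name:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// cpuRequestDownwardFile publishes the named container's CPU request at
// <mountPath>/cpu_request via the downward API.
func cpuRequestDownwardFile() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
}
```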
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3286,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:02:58.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:02:58.660: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 19 22:02:58.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:02:58.690: INFO: Number of nodes with available pods: 0 Mar 19 22:02:58.690: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:02:59.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:02:59.757: INFO: Number of nodes with available pods: 0 Mar 19 22:02:59.757: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:03:00.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:00.698: INFO: Number of nodes with available pods: 0 Mar 19 22:03:00.698: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:03:01.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:01.758: INFO: Number of nodes with available pods: 2 Mar 19 22:03:01.758: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 19 22:03:01.806: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:01.806: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:01.821: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:02.826: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:02.826: INFO: Wrong image for pod: daemon-set-w56w7. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:02.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:03.896: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:03.896: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:03.900: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:04.825: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:04.825: INFO: Pod daemon-set-jjw6g is not available Mar 19 22:03:04.825: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:04.828: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:05.826: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:05.826: INFO: Pod daemon-set-jjw6g is not available Mar 19 22:03:05.826: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:05.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:06.826: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:06.826: INFO: Pod daemon-set-jjw6g is not available Mar 19 22:03:06.826: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:06.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:07.825: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:07.825: INFO: Pod daemon-set-jjw6g is not available Mar 19 22:03:07.825: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:07.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:08.825: INFO: Wrong image for pod: daemon-set-jjw6g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:08.825: INFO: Pod daemon-set-jjw6g is not available Mar 19 22:03:08.825: INFO: Wrong image for pod: daemon-set-w56w7. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:08.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:09.825: INFO: Pod daemon-set-ghfbx is not available Mar 19 22:03:09.825: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:09.828: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:11.153: INFO: Pod daemon-set-ghfbx is not available Mar 19 22:03:11.153: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:11.157: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:11.835: INFO: Pod daemon-set-ghfbx is not available Mar 19 22:03:11.835: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:11.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:12.824: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:12.828: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:13.825: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:13.825: INFO: Pod daemon-set-w56w7 is not available Mar 19 22:03:13.828: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:14.826: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:14.826: INFO: Pod daemon-set-w56w7 is not available Mar 19 22:03:14.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:15.824: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:15.824: INFO: Pod daemon-set-w56w7 is not available Mar 19 22:03:15.828: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:16.825: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 19 22:03:16.826: INFO: Pod daemon-set-w56w7 is not available Mar 19 22:03:16.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:17.824: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:17.825: INFO: Pod daemon-set-w56w7 is not available Mar 19 22:03:17.835: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:18.826: INFO: Wrong image for pod: daemon-set-w56w7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 19 22:03:18.826: INFO: Pod daemon-set-w56w7 is not available Mar 19 22:03:18.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:19.826: INFO: Pod daemon-set-cbj82 is not available Mar 19 22:03:19.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 19 22:03:19.834: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:19.840: INFO: Number of nodes with available pods: 1 Mar 19 22:03:19.840: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:03:20.859: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:20.919: INFO: Number of nodes with available pods: 1 Mar 19 22:03:20.919: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:03:21.848: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:03:21.851: INFO: Number of nodes with available pods: 2 Mar 19 22:03:21.851: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-22, will wait for the garbage collector to delete the pods Mar 19 22:03:21.923: INFO: Deleting DaemonSet.extensions daemon-set took: 6.188926ms Mar 19 22:03:22.223: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266605ms Mar 19 22:03:25.751: INFO: Number of nodes with available pods: 0 Mar 19 22:03:25.751: INFO: Number of running nodes: 0, number of available pods: 0 Mar 19 22:03:25.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-22/daemonsets","resourceVersion":"1128436"},"items":null} Mar 19 22:03:25.757: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-22/pods","resourceVersion":"1128436"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:03:25.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-22" for this suite. • [SLOW TEST:27.196 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":200,"skipped":3286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:03:25.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-87.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-87.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-87.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-87.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-87.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-87.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-87.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.146.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.146.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.146.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.146.172_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-87.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-87.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-87.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-87.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-87.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-87.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-87.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-87.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.146.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.146.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.146.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.146.172_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 19 22:03:31.935: INFO: Unable to read wheezy_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.941: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.944: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.968: INFO: Unable to read jessie_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.971: INFO: Unable to read jessie_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.976: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:31.993: INFO: Lookups using dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 failed for: [wheezy_udp@dns-test-service.dns-87.svc.cluster.local wheezy_tcp@dns-test-service.dns-87.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_udp@dns-test-service.dns-87.svc.cluster.local jessie_tcp@dns-test-service.dns-87.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local] Mar 19 22:03:36.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.005: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.009: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.032: INFO: Unable to read jessie_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.035: INFO: Unable to read jessie_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.039: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:37.062: INFO: Lookups using dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 failed for: [wheezy_udp@dns-test-service.dns-87.svc.cluster.local wheezy_tcp@dns-test-service.dns-87.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_udp@dns-test-service.dns-87.svc.cluster.local jessie_tcp@dns-test-service.dns-87.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local] Mar 19 22:03:41.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.001: INFO: Unable to read wheezy_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.026: INFO: Unable to read jessie_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.029: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.031: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.034: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:42.052: INFO: Lookups using dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 failed for: [wheezy_udp@dns-test-service.dns-87.svc.cluster.local wheezy_tcp@dns-test-service.dns-87.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_udp@dns-test-service.dns-87.svc.cluster.local jessie_tcp@dns-test-service.dns-87.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local] Mar 19 22:03:46.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.006: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.009: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.027: INFO: Unable to read jessie_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.033: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:47.055: INFO: Lookups using 
dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 failed for: [wheezy_udp@dns-test-service.dns-87.svc.cluster.local wheezy_tcp@dns-test-service.dns-87.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_udp@dns-test-service.dns-87.svc.cluster.local jessie_tcp@dns-test-service.dns-87.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local] Mar 19 22:03:51.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.001: INFO: Unable to read wheezy_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.026: INFO: Unable to read jessie_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.032: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.035: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:52.050: INFO: Lookups using dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 failed for: [wheezy_udp@dns-test-service.dns-87.svc.cluster.local wheezy_tcp@dns-test-service.dns-87.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_udp@dns-test-service.dns-87.svc.cluster.local jessie_tcp@dns-test-service.dns-87.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local] Mar 19 22:03:56.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.001: 
INFO: Unable to read wheezy_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.005: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.008: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.053: INFO: Unable to read jessie_udp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.057: INFO: Unable to read jessie_tcp@dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.060: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.063: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local from pod dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483: the server could not find the requested resource (get pods dns-test-799a099f-e14e-484d-a126-77b551ef0483) Mar 19 22:03:57.081: INFO: Lookups using dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 failed for: [wheezy_udp@dns-test-service.dns-87.svc.cluster.local wheezy_tcp@dns-test-service.dns-87.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_udp@dns-test-service.dns-87.svc.cluster.local jessie_tcp@dns-test-service.dns-87.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-87.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-87.svc.cluster.local] Mar 19 22:04:02.055: INFO: DNS probes using dns-87/dns-test-799a099f-e14e-484d-a126-77b551ef0483 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:04:02.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-87" for this suite. 
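The lookups above fail for roughly 25 seconds and then succeed: the probe pods keep retrying while the new service's records propagate into cluster DNS, and the test only requires every name to resolve eventually. A one-off way to spot-check the same A record by hand, assuming a throwaway pod with dig available (the dnstools image is an illustrative choice, not the one used by the suite):

kubectl -n dns-87 run --rm -it dns-check --generator=run-pod/v1 --restart=Never \
  --image=infoblox/dnstools -- dig +search +noall +answer dns-test-service.dns-87.svc.cluster.local A
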
• [SLOW TEST:36.996 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":201,"skipped":3310,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:04:02.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:04:02.891: INFO: Create a RollingUpdate DaemonSet Mar 19 22:04:02.894: INFO: Check that daemon pods launch on every node of the cluster Mar 19 22:04:02.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:02.907: INFO: Number of nodes with available pods: 0 Mar 19 22:04:02.907: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:04:03.911: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:03.913: INFO: Number of nodes with available pods: 0 Mar 19 22:04:03.914: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:04:04.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:04.948: INFO: Number of nodes with available pods: 0 Mar 19 22:04:04.948: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:04:05.911: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:05.915: INFO: Number of nodes with available pods: 0 Mar 19 22:04:05.915: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:04:06.912: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:06.915: INFO: Number of nodes with available pods: 1 Mar 19 22:04:06.915: INFO: Node jerma-worker2 is running more than one daemon pod Mar 19 22:04:07.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:07.936: INFO: Number of nodes with available pods: 2 Mar 19 22:04:07.936: INFO: Number of running nodes: 2, number of available pods: 2 
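How the rollout below proceeds is governed by the DaemonSet's update strategy; RollingUpdate with its default maxUnavailable of 1 is also why only one pod at a time went unavailable in the rolling-update test earlier. To confirm the strategy in effect on the object under test (a sketch, using the namespace that appears below):

kubectl -n daemonsets-5218 get daemonset daemon-set -o jsonpath='{.spec.updateStrategy}'
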
Mar 19 22:04:07.936: INFO: Update the DaemonSet to trigger a rollout Mar 19 22:04:07.960: INFO: Updating DaemonSet daemon-set Mar 19 22:04:19.983: INFO: Roll back the DaemonSet before rollout is complete Mar 19 22:04:19.990: INFO: Updating DaemonSet daemon-set Mar 19 22:04:19.990: INFO: Make sure DaemonSet rollback is complete Mar 19 22:04:19.996: INFO: Wrong image for pod: daemon-set-jtgkm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 19 22:04:19.996: INFO: Pod daemon-set-jtgkm is not available Mar 19 22:04:20.016: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:21.020: INFO: Wrong image for pod: daemon-set-jtgkm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 19 22:04:21.020: INFO: Pod daemon-set-jtgkm is not available Mar 19 22:04:21.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 19 22:04:22.021: INFO: Pod daemon-set-997dn is not available Mar 19 22:04:22.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5218, will wait for the garbage collector to delete the pods Mar 19 22:04:22.087: INFO: Deleting DaemonSet.extensions daemon-set took: 5.857039ms Mar 19 22:04:22.387: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241934ms Mar 19 22:04:29.591: INFO: Number of nodes with available pods: 0 Mar 19 22:04:29.591: INFO: Number of running nodes: 0, number of available pods: 0 Mar 19 22:04:29.594: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5218/daemonsets","resourceVersion":"1128788"},"items":null} Mar 19 22:04:29.596: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5218/pods","resourceVersion":"1128788"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:04:29.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5218" for this suite. 
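The sequence above pushes the DaemonSet to an unpullable image (foo:non-existent), rolls it back before the rollout can complete, and then verifies that pods which never ran the bad revision were not restarted. An imperative equivalent of that rollback, with the container name left as a placeholder since it is not shown in the log:

kubectl -n daemonsets-5218 set image daemonset/daemon-set CONTAINER=foo:non-existent  # trigger the rollout
kubectl -n daemonsets-5218 rollout undo daemonset/daemon-set                          # roll back mid-rollout
kubectl -n daemonsets-5218 rollout status daemonset/daemon-set                        # wait for convergence
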
• [SLOW TEST:26.842 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":202,"skipped":3323,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:04:29.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 19 22:04:29.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8619' Mar 19 22:04:29.817: INFO: stderr: "" Mar 19 22:04:29.817: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 19 22:04:34.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8619 -o json' Mar 19 22:04:34.966: INFO: stderr: "" Mar 19 22:04:34.966: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-19T22:04:29Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8619\",\n \"resourceVersion\": \"1128809\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8619/pods/e2e-test-httpd-pod\",\n \"uid\": \"b8656795-3847-479a-841c-70b6baddae14\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8hfcx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8hfcx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8hfcx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-19T22:04:29Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-19T22:04:32Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-19T22:04:32Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-19T22:04:29Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://34f4740010ce93a7008c6810add1b9c37b2339cedc8cd27347b823cff53f7889\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-19T22:04:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.89\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.89\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-19T22:04:29Z\"\n }\n}\n" STEP: replace the image in the pod Mar 19 22:04:34.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8619' Mar 19 22:04:35.200: INFO: stderr: "" Mar 19 22:04:35.200: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 19 22:04:35.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8619' Mar 19 22:04:49.483: INFO: stderr: "" Mar 19 22:04:49.483: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:04:49.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8619" for this suite. 
• [SLOW TEST:19.894 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":203,"skipped":3325,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:04:49.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:04:49.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249" in namespace "projected-4476" to be "success or failure" Mar 19 22:04:49.589: INFO: Pod "downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249": Phase="Pending", Reason="", readiness=false. Elapsed: 14.923612ms Mar 19 22:04:51.593: INFO: Pod "downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019151684s Mar 19 22:04:53.597: INFO: Pod "downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023365761s STEP: Saw pod success Mar 19 22:04:53.597: INFO: Pod "downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249" satisfied condition "success or failure" Mar 19 22:04:53.601: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249 container client-container: STEP: delete the pod Mar 19 22:04:53.633: INFO: Waiting for pod downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249 to disappear Mar 19 22:04:53.637: INFO: Pod downwardapi-volume-2e70f754-ed5c-42a8-b794-d207abb79249 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:04:53.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4476" for this suite. 
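In manifest form, the DefaultMode test above amounts to a projected downwardAPI volume with defaultMode set and a container that prints the resulting file permissions. A minimal sketch under those assumptions (the pod name, mount path, and the busybox stat call are illustrative, not the suite's actual test image):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Print the mode of the projected file; with defaultMode 0400 this should output 400.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
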
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3334,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:04:53.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-12d64370-d05b-486c-a6e4-ac1506840494 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:04:53.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8349" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":205,"skipped":3346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:04:53.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:04:53.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da" in namespace "projected-7262" to be "success or failure" Mar 19 22:04:53.873: INFO: Pod "downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da": Phase="Pending", Reason="", readiness=false. Elapsed: 41.453953ms Mar 19 22:04:55.915: INFO: Pod "downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08335784s Mar 19 22:04:57.919: INFO: Pod "downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087362127s STEP: Saw pod success Mar 19 22:04:57.919: INFO: Pod "downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da" satisfied condition "success or failure" Mar 19 22:04:57.922: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da container client-container: STEP: delete the pod Mar 19 22:04:57.957: INFO: Waiting for pod downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da to disappear Mar 19 22:04:57.962: INFO: Pod downwardapi-volume-306cd123-fe37-4c05-9e2e-1695752453da no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:04:57.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7262" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3372,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:04:57.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1353 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 19 22:04:58.012: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 19 22:05:24.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.66:8080/dial?request=hostname&protocol=udp&host=10.244.1.65&port=8081&tries=1'] Namespace:pod-network-test-1353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 22:05:24.146: INFO: >>> kubeConfig: /root/.kube/config I0319 22:05:24.177970 6 log.go:172] (0xc001738420) (0xc000a77d60) Create stream I0319 22:05:24.178002 6 log.go:172] (0xc001738420) (0xc000a77d60) Stream added, broadcasting: 1 I0319 22:05:24.180076 6 log.go:172] (0xc001738420) Reply frame received for 1 I0319 22:05:24.180142 6 log.go:172] (0xc001738420) (0xc001ae4000) Create stream I0319 22:05:24.180165 6 log.go:172] (0xc001738420) (0xc001ae4000) Stream added, broadcasting: 3 I0319 22:05:24.181849 6 log.go:172] (0xc001738420) Reply frame received for 3 I0319 22:05:24.181912 6 log.go:172] (0xc001738420) (0xc000a77ea0) Create stream I0319 22:05:24.181932 6 log.go:172] (0xc001738420) (0xc000a77ea0) Stream added, broadcasting: 5 I0319 22:05:24.183068 6 log.go:172] (0xc001738420) Reply frame received for 5 I0319 22:05:24.271722 6 log.go:172] (0xc001738420) Data frame received for 3 I0319 22:05:24.271777 6 log.go:172] (0xc001ae4000) (3) Data frame handling I0319 22:05:24.271802 6 log.go:172] (0xc001ae4000) (3) Data frame sent I0319 22:05:24.272080 6 log.go:172] 
(0xc001738420) Data frame received for 5 I0319 22:05:24.272110 6 log.go:172] (0xc000a77ea0) (5) Data frame handling I0319 22:05:24.272140 6 log.go:172] (0xc001738420) Data frame received for 3 I0319 22:05:24.272165 6 log.go:172] (0xc001ae4000) (3) Data frame handling I0319 22:05:24.273961 6 log.go:172] (0xc001738420) Data frame received for 1 I0319 22:05:24.273973 6 log.go:172] (0xc000a77d60) (1) Data frame handling I0319 22:05:24.273988 6 log.go:172] (0xc000a77d60) (1) Data frame sent I0319 22:05:24.274058 6 log.go:172] (0xc001738420) (0xc000a77d60) Stream removed, broadcasting: 1 I0319 22:05:24.274181 6 log.go:172] (0xc001738420) (0xc000a77d60) Stream removed, broadcasting: 1 I0319 22:05:24.274199 6 log.go:172] (0xc001738420) (0xc001ae4000) Stream removed, broadcasting: 3 I0319 22:05:24.274431 6 log.go:172] (0xc001738420) (0xc000a77ea0) Stream removed, broadcasting: 5 I0319 22:05:24.274504 6 log.go:172] (0xc001738420) Go away received Mar 19 22:05:24.274: INFO: Waiting for responses: map[] Mar 19 22:05:24.278: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.66:8080/dial?request=hostname&protocol=udp&host=10.244.2.91&port=8081&tries=1'] Namespace:pod-network-test-1353 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 22:05:24.278: INFO: >>> kubeConfig: /root/.kube/config I0319 22:05:24.314044 6 log.go:172] (0xc002a18580) (0xc0014bc500) Create stream I0319 22:05:24.314075 6 log.go:172] (0xc002a18580) (0xc0014bc500) Stream added, broadcasting: 1 I0319 22:05:24.316506 6 log.go:172] (0xc002a18580) Reply frame received for 1 I0319 22:05:24.316555 6 log.go:172] (0xc002a18580) (0xc0014bc640) Create stream I0319 22:05:24.316574 6 log.go:172] (0xc002a18580) (0xc0014bc640) Stream added, broadcasting: 3 I0319 22:05:24.317782 6 log.go:172] (0xc002a18580) Reply frame received for 3 I0319 22:05:24.317821 6 log.go:172] (0xc002a18580) (0xc0009921e0) Create stream I0319 22:05:24.317835 6 log.go:172] (0xc002a18580) (0xc0009921e0) Stream added, broadcasting: 5 I0319 22:05:24.318727 6 log.go:172] (0xc002a18580) Reply frame received for 5 I0319 22:05:24.383904 6 log.go:172] (0xc002a18580) Data frame received for 3 I0319 22:05:24.383953 6 log.go:172] (0xc0014bc640) (3) Data frame handling I0319 22:05:24.383983 6 log.go:172] (0xc0014bc640) (3) Data frame sent I0319 22:05:24.384436 6 log.go:172] (0xc002a18580) Data frame received for 3 I0319 22:05:24.384459 6 log.go:172] (0xc0014bc640) (3) Data frame handling I0319 22:05:24.384589 6 log.go:172] (0xc002a18580) Data frame received for 5 I0319 22:05:24.384621 6 log.go:172] (0xc0009921e0) (5) Data frame handling I0319 22:05:24.386286 6 log.go:172] (0xc002a18580) Data frame received for 1 I0319 22:05:24.386326 6 log.go:172] (0xc0014bc500) (1) Data frame handling I0319 22:05:24.386351 6 log.go:172] (0xc0014bc500) (1) Data frame sent I0319 22:05:24.386517 6 log.go:172] (0xc002a18580) (0xc0014bc500) Stream removed, broadcasting: 1 I0319 22:05:24.386582 6 log.go:172] (0xc002a18580) (0xc0014bc500) Stream removed, broadcasting: 1 I0319 22:05:24.386620 6 log.go:172] (0xc002a18580) (0xc0014bc640) Stream removed, broadcasting: 3 I0319 22:05:24.386637 6 log.go:172] (0xc002a18580) (0xc0009921e0) Stream removed, broadcasting: 5 Mar 19 22:05:24.386: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 I0319 22:05:24.386744 6 log.go:172] (0xc002a18580) Go away 
received Mar 19 22:05:24.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1353" for this suite. • [SLOW TEST:26.427 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3379,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:05:24.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 19 22:05:24.462: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 19 22:05:24.470: INFO: Waiting for terminating namespaces to be deleted... 
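The intra-pod UDP check that just passed works by exec-ing into a host-network helper pod and asking agnhost's /dial endpoint to relay a UDP hostname request to each netserver pod, then comparing the answers. Reproduced by hand with the pod IPs from this particular run (they will differ on any other cluster):

kubectl -n pod-network-test-1353 exec host-test-container-pod -- /bin/sh -c \
  "curl -g -q -s 'http://10.244.1.66:8080/dial?request=hostname&protocol=udp&host=10.244.1.65&port=8081&tries=1'"
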
Mar 19 22:05:24.473: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 19 22:05:24.477: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.477: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 22:05:24.477: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.477: INFO: Container kube-proxy ready: true, restart count 0 Mar 19 22:05:24.477: INFO: netserver-0 from pod-network-test-1353 started at 2020-03-19 22:04:58 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.477: INFO: Container webserver ready: true, restart count 0 Mar 19 22:05:24.477: INFO: test-container-pod from pod-network-test-1353 started at 2020-03-19 22:05:20 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.477: INFO: Container webserver ready: true, restart count 0 Mar 19 22:05:24.477: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 19 22:05:24.481: INFO: netserver-1 from pod-network-test-1353 started at 2020-03-19 22:04:58 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.481: INFO: Container webserver ready: true, restart count 0 Mar 19 22:05:24.481: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.481: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 22:05:24.481: INFO: host-test-container-pod from pod-network-test-1353 started at 2020-03-19 22:05:20 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.481: INFO: Container agnhost ready: true, restart count 0 Mar 19 22:05:24.481: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:05:24.481: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fdd3cdbce53a4b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:05:25.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1029" for this suite. 
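The single Warning event above is the whole assertion: a pod whose nodeSelector matches no node must stay Pending with a FailedScheduling event rather than being placed anywhere. A sketch of reproducing it (the label key and value are arbitrary, chosen to match nothing; the generator flag mirrors the suite's own kubectl usage):

kubectl run restricted-pod --generator=run-pod/v1 --restart=Never \
  --image=docker.io/library/httpd:2.4.38-alpine \
  --overrides='{"spec":{"nodeSelector":{"no-such-label":"true"}}}'
kubectl get events --field-selector reason=FailedScheduling
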
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":208,"skipped":3387,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:05:25.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:05:25.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a" in namespace "downward-api-5644" to be "success or failure" Mar 19 22:05:25.642: INFO: Pod "downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.425473ms Mar 19 22:05:27.658: INFO: Pod "downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047417793s Mar 19 22:05:29.684: INFO: Pod "downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073684114s STEP: Saw pod success Mar 19 22:05:29.684: INFO: Pod "downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a" satisfied condition "success or failure" Mar 19 22:05:29.699: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a container client-container: STEP: delete the pod Mar 19 22:05:29.751: INFO: Waiting for pod downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a to disappear Mar 19 22:05:29.802: INFO: Pod downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:05:29.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5644" for this suite. 
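The "Trying to get logs" step above is the actual assertion: the framework fetches the container's stdout and compares it against the pod name injected through the downward API. The equivalent manual check, using the pod name from this run (only valid while the pod still exists):

kubectl -n downward-api-5644 logs downwardapi-volume-5cd45972-59c4-4782-8ac9-d690dc42c86a -c client-container
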
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3409,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:05:30.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:05:30.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee" in namespace "projected-8158" to be "success or failure" Mar 19 22:05:30.431: INFO: Pod "downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee": Phase="Pending", Reason="", readiness=false. Elapsed: 207.384898ms Mar 19 22:05:32.434: INFO: Pod "downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210986453s Mar 19 22:05:34.439: INFO: Pod "downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215233623s STEP: Saw pod success Mar 19 22:05:34.439: INFO: Pod "downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee" satisfied condition "success or failure" Mar 19 22:05:34.442: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee container client-container: STEP: delete the pod Mar 19 22:05:34.462: INFO: Waiting for pod downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee to disappear Mar 19 22:05:34.484: INFO: Pod downwardapi-volume-f34e0203-e2c0-418b-b98f-6c3d640546ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:05:34.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8158" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:05:34.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:05:34.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 19 22:05:35.155: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-19T22:05:35Z generation:1 name:name1 resourceVersion:1129227 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:605d0930-56ba-4a58-887f-9717b5d88f6e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 19 22:05:45.159: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-19T22:05:45Z generation:1 name:name2 resourceVersion:1129267 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7e7200f6-9bff-49f2-8080-90a1007c663f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 19 22:05:55.165: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-19T22:05:35Z generation:2 name:name1 resourceVersion:1129296 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:605d0930-56ba-4a58-887f-9717b5d88f6e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 19 22:06:05.171: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-19T22:05:45Z generation:2 name:name2 resourceVersion:1129326 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7e7200f6-9bff-49f2-8080-90a1007c663f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 19 22:06:15.182: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-19T22:05:35Z generation:2 name:name1 resourceVersion:1129355 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:605d0930-56ba-4a58-887f-9717b5d88f6e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 19 22:06:25.190: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-19T22:05:45Z generation:2 name:name2 resourceVersion:1129384 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:7e7200f6-9bff-49f2-8080-90a1007c663f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:06:35.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8028" for this suite. • [SLOW TEST:61.214 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":211,"skipped":3447,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:06:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 19 22:06:40.314: INFO: Successfully updated pod "labelsupdateece7b466-015b-48df-a401-f5a3871527ff" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:06:42.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9262" for this suite. 
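
The labels-update case above mutates the pod's labels after creation and waits for the kubelet to rewrite the mounted downwardAPI file. One way to issue such a label update, sketched against a current client-go (v0.18+, where these methods take a context; the pod name and namespace below are illustrative, not the suite's):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Merge-patch a label onto a running pod; the kubelet rewrites the
	// pod's mounted downwardAPI "labels" file on a later sync, not instantly.
	patch := []byte(`{"metadata":{"labels":{"key":"value2"}}}`)
	pod, err := cs.CoreV1().Pods("default").Patch(context.TODO(),
		"labelsupdate-demo", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", pod.Labels)
}
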
• [SLOW TEST:6.666 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:06:42.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ee451b59-97ef-4d5f-8210-06c1cdb735c3 STEP: Creating a pod to test consume secrets Mar 19 22:06:42.473: INFO: Waiting up to 5m0s for pod "pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f" in namespace "secrets-758" to be "success or failure" Mar 19 22:06:42.497: INFO: Pod "pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.866959ms Mar 19 22:06:44.501: INFO: Pod "pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028342637s Mar 19 22:06:46.505: INFO: Pod "pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032793722s STEP: Saw pod success Mar 19 22:06:46.505: INFO: Pod "pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f" satisfied condition "success or failure" Mar 19 22:06:46.509: INFO: Trying to get logs from node jerma-worker pod pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f container secret-volume-test: STEP: delete the pod Mar 19 22:06:46.527: INFO: Waiting for pod pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f to disappear Mar 19 22:06:46.544: INFO: Pod pod-secrets-47342bc0-5083-43b0-901e-b76f4906591f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:06:46.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-758" for this suite. 
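
The secret-volume case above combines three knobs: a non-root runAsUser, an fsGroup, and a non-default file mode. A sketch of a pod spec exercising the same combination (the UID, GID, mode, names, and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, fsGroup, mode := int64(1000), int64(1000), int32(0440)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,     // run as a non-root UID
				FSGroup:   &fsGroup, // group ownership applied to volume files
			},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ln /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo",
						DefaultMode: &mode, // files land as 0440 rather than the 0644 default
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
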
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:06:46.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:06:46.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9021" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":214,"skipped":3537,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:06:46.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-ad08ab28-39c2-4b5d-8dd1-4ef3875d200a STEP: Creating configMap with name cm-test-opt-upd-a31bdfef-53a8-4020-b137-eaf27fe280ae STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ad08ab28-39c2-4b5d-8dd1-4ef3875d200a STEP: Updating configmap cm-test-opt-upd-a31bdfef-53a8-4020-b137-eaf27fe280ae STEP: Creating configMap with name cm-test-opt-create-72c966f2-8874-4414-a748-9c7defbfa16b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:06:56.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7003" for this suite. 
• [SLOW TEST:10.205 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3545,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:06:56.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-e31965d3-7ef5-4747-b655-77b683b91497 STEP: Creating a pod to test consume secrets Mar 19 22:06:57.032: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72" in namespace "projected-2870" to be "success or failure" Mar 19 22:06:57.035: INFO: Pod "pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.597113ms Mar 19 22:06:59.040: INFO: Pod "pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008260971s Mar 19 22:07:01.102: INFO: Pod "pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070550753s STEP: Saw pod success Mar 19 22:07:01.102: INFO: Pod "pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72" satisfied condition "success or failure" Mar 19 22:07:01.105: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72 container projected-secret-volume-test: STEP: delete the pod Mar 19 22:07:01.122: INFO: Waiting for pod pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72 to disappear Mar 19 22:07:01.138: INFO: Pod pod-projected-secrets-6d363e11-b406-4835-9f8f-8f4c7aaefd72 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:07:01.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2870" for this suite. 
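
The defaultMode assertion above is set at the level of the projected volume rather than per item. A sketch of such a volume, assuming an illustrative 0400 mode and secret name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// DefaultMode applies to every file the projection writes,
				// unless an item overrides it with its own Mode.
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-demo"},
					},
				}},
			},
		},
	}
	fmt.Printf("%s defaultMode %o\n", vol.Name, *vol.VolumeSource.Projected.DefaultMode)
}
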
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3563,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:07:01.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 19 22:07:01.248: INFO: Waiting up to 5m0s for pod "pod-053c0af1-21d8-4565-9e40-221729b559ab" in namespace "emptydir-8463" to be "success or failure" Mar 19 22:07:01.258: INFO: Pod "pod-053c0af1-21d8-4565-9e40-221729b559ab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.864053ms Mar 19 22:07:03.261: INFO: Pod "pod-053c0af1-21d8-4565-9e40-221729b559ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013490497s Mar 19 22:07:05.265: INFO: Pod "pod-053c0af1-21d8-4565-9e40-221729b559ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017220137s STEP: Saw pod success Mar 19 22:07:05.265: INFO: Pod "pod-053c0af1-21d8-4565-9e40-221729b559ab" satisfied condition "success or failure" Mar 19 22:07:05.268: INFO: Trying to get logs from node jerma-worker2 pod pod-053c0af1-21d8-4565-9e40-221729b559ab container test-container: STEP: delete the pod Mar 19 22:07:05.301: INFO: Waiting for pod pod-053c0af1-21d8-4565-9e40-221729b559ab to disappear Mar 19 22:07:05.312: INFO: Pod pod-053c0af1-21d8-4565-9e40-221729b559ab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:07:05.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8463" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3566,"failed":0} S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:07:05.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-5f27dc27-c774-42f0-a394-c9dba818d6a6 STEP: Creating secret with name s-test-opt-upd-ca09aede-cc2f-4537-8159-4978e74198b4 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5f27dc27-c774-42f0-a394-c9dba818d6a6 STEP: Updating secret s-test-opt-upd-ca09aede-cc2f-4537-8159-4978e74198b4 STEP: Creating secret with name s-test-opt-create-0250bd12-c7bd-4de9-926a-67c659cf740e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:08:19.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8912" for this suite. • [SLOW TEST:74.540 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3567,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:08:19.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:08:23.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-7986" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3571,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:08:23.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 19 22:08:32.057: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 19 22:08:32.062: INFO: Pod pod-with-poststart-exec-hook still exists Mar 19 22:08:34.062: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 19 22:08:34.066: INFO: Pod pod-with-poststart-exec-hook still exists Mar 19 22:08:36.062: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 19 22:08:36.066: INFO: Pod pod-with-poststart-exec-hook still exists Mar 19 22:08:38.062: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 19 22:08:38.066: INFO: Pod pod-with-poststart-exec-hook still exists Mar 19 22:08:40.062: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 19 22:08:40.065: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:08:40.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-55" for this suite. • [SLOW TEST:16.182 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3588,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:08:40.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:08:53.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-844" for this suite. • [SLOW TEST:13.139 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":221,"skipped":3593,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:08:53.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:08:53.334: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-733e52a9-5ff7-4cc0-9e4e-d442d19d9aec" in namespace "security-context-test-3085" to be "success or failure" Mar 19 22:08:53.352: INFO: Pod "busybox-readonly-false-733e52a9-5ff7-4cc0-9e4e-d442d19d9aec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.253475ms Mar 19 22:08:55.360: INFO: Pod "busybox-readonly-false-733e52a9-5ff7-4cc0-9e4e-d442d19d9aec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026453912s Mar 19 22:08:57.364: INFO: Pod "busybox-readonly-false-733e52a9-5ff7-4cc0-9e4e-d442d19d9aec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030561901s Mar 19 22:08:57.364: INFO: Pod "busybox-readonly-false-733e52a9-5ff7-4cc0-9e4e-d442d19d9aec" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:08:57.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3085" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3607,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:08:57.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:04.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7257" for this suite. • [SLOW TEST:7.201 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":223,"skipped":3614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:04.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-e065d862-66aa-4aec-9aa9-b362fd219210 STEP: Creating a pod to test consume configMaps Mar 19 22:09:04.657: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe" in namespace "projected-9991" to be "success or failure" Mar 19 22:09:04.671: INFO: Pod "pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.550465ms Mar 19 22:09:06.673: INFO: Pod "pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016137237s Mar 19 22:09:08.699: INFO: Pod "pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041181765s STEP: Saw pod success Mar 19 22:09:08.699: INFO: Pod "pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe" satisfied condition "success or failure" Mar 19 22:09:08.732: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe container projected-configmap-volume-test: STEP: delete the pod Mar 19 22:09:08.752: INFO: Waiting for pod pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe to disappear Mar 19 22:09:08.757: INFO: Pod pod-projected-configmaps-73e3d3b3-9de5-40b1-b4af-02bd01c9fdfe no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:08.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9991" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:08.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:09:08.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea" in namespace "projected-5302" to be "success or failure" Mar 19 22:09:08.858: INFO: Pod "downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea": Phase="Pending", Reason="", readiness=false. Elapsed: 33.581573ms Mar 19 22:09:10.862: INFO: Pod "downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037159213s Mar 19 22:09:12.866: INFO: Pod "downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041033641s STEP: Saw pod success Mar 19 22:09:12.866: INFO: Pod "downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea" satisfied condition "success or failure" Mar 19 22:09:12.869: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea container client-container: STEP: delete the pod Mar 19 22:09:12.890: INFO: Waiting for pod downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea to disappear Mar 19 22:09:12.895: INFO: Pod downwardapi-volume-95a67a33-ed34-4ce4-8fd9-056af5d9dfea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:12.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5302" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3696,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:12.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-1564/secret-test-bb00f933-8cec-4a14-b18b-271e4212b299 STEP: Creating a pod to test consume secrets Mar 19 22:09:12.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3" in namespace "secrets-1564" to be "success or failure" Mar 19 22:09:13.012: INFO: Pod "pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3": Phase="Pending", Reason="", readiness=false. Elapsed: 34.183948ms Mar 19 22:09:15.016: INFO: Pod "pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038208923s Mar 19 22:09:17.020: INFO: Pod "pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042422444s STEP: Saw pod success Mar 19 22:09:17.020: INFO: Pod "pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3" satisfied condition "success or failure" Mar 19 22:09:17.023: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3 container env-test: STEP: delete the pod Mar 19 22:09:17.065: INFO: Waiting for pod pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3 to disappear Mar 19 22:09:17.075: INFO: Pod pod-configmaps-5c83a19f-1763-4e69-a963-6b6c54f4dfa3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:17.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1564" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3714,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:17.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 19 22:09:17.179: INFO: Waiting up to 5m0s for pod "downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305" in namespace "downward-api-2400" to be "success or failure" Mar 19 22:09:17.183: INFO: Pod "downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443752ms Mar 19 22:09:19.188: INFO: Pod "downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008071727s Mar 19 22:09:21.191: INFO: Pod "downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01185005s STEP: Saw pod success Mar 19 22:09:21.191: INFO: Pod "downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305" satisfied condition "success or failure" Mar 19 22:09:21.195: INFO: Trying to get logs from node jerma-worker2 pod downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305 container dapi-container: STEP: delete the pod Mar 19 22:09:21.215: INFO: Waiting for pod downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305 to disappear Mar 19 22:09:21.226: INFO: Pod downward-api-1a375ab3-6fb7-48ee-a806-fee67c059305 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:21.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2400" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3715,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:21.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:09:21.348: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e" in namespace "projected-7868" to be "success or failure" Mar 19 22:09:21.351: INFO: Pod "downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.895473ms Mar 19 22:09:23.357: INFO: Pod "downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008489174s Mar 19 22:09:25.360: INFO: Pod "downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011833618s STEP: Saw pod success Mar 19 22:09:25.360: INFO: Pod "downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e" satisfied condition "success or failure" Mar 19 22:09:25.362: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e container client-container: STEP: delete the pod Mar 19 22:09:25.394: INFO: Waiting for pod downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e to disappear Mar 19 22:09:25.399: INFO: Pod downwardapi-volume-4327a486-7ae1-439e-82e0-ee7fc773d95e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:25.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7868" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:25.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:25.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4389" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":229,"skipped":3762,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:25.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:09:25.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 19 22:09:25.751: INFO: stderr: "" Mar 19 22:09:25.751: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:25.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9905" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":230,"skipped":3763,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:25.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 19 22:09:29.848: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:29.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2774" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:29.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:34.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9452" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:34.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:09:34.113: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:09:34.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1326" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":233,"skipped":3851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:09:34.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-275d420c-70c5-4bf8-b88f-6a96aa9e5cfa STEP: Creating the pod STEP: Updating configmap configmap-test-upd-275d420c-70c5-4bf8-b88f-6a96aa9e5cfa STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:10:49.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8995" for this suite. 
• [SLOW TEST:74.481 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3878,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:10:49.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ae25fae5-d6d2-4cb1-a3d2-d07f55a34f8d STEP: Creating a pod to test consume configMaps Mar 19 22:10:49.307: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab" in namespace "projected-7178" to be "success or failure" Mar 19 22:10:49.333: INFO: Pod "pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 25.098255ms Mar 19 22:10:51.336: INFO: Pod "pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028627037s Mar 19 22:10:53.340: INFO: Pod "pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03287586s STEP: Saw pod success Mar 19 22:10:53.340: INFO: Pod "pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab" satisfied condition "success or failure" Mar 19 22:10:53.343: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab container projected-configmap-volume-test: STEP: delete the pod Mar 19 22:10:53.646: INFO: Waiting for pod pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab to disappear Mar 19 22:10:53.669: INFO: Pod pod-projected-configmaps-396d50ce-7f35-4f0d-be1b-9e724271f5ab no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:10:53.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7178" for this suite. 
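
"Mappings and Item mode" in the test name above refers to KeyToPath entries, which rename a configMap key inside the mount and give the resulting file its own mode. A sketch of the items list (the names, path, and mode are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map-demo"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key in the configMap
							Path: "path/to/data-2", // remapped filename inside the mount
							Mode: &mode,            // per-item file mode
						}},
					},
				}},
			},
		},
	}
	item := vol.VolumeSource.Projected.Sources[0].ConfigMap.Items[0]
	fmt.Printf("%s -> %s (mode %o)\n", item.Key, item.Path, *item.Mode)
}
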
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3881,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:10:53.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:10:57.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2480" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3890,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:10:57.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3451, will wait for the garbage collector to delete the pods Mar 19 22:11:03.980: INFO: Deleting Job.batch foo took: 19.924689ms Mar 19 22:11:04.281: INFO: Terminating Job.batch foo pods took: 300.239201ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:11:39.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3451" for this suite. 
• [SLOW TEST:41.493 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":237,"skipped":3910,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:11:39.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:11:39.355: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1633 I0319 22:11:39.397162 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1633, replica count: 1 I0319 22:11:40.447567 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:11:41.447789 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:11:42.448033 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 19 22:11:42.610: INFO: Created: latency-svc-vk9wm Mar 19 22:11:42.616: INFO: Got endpoints: latency-svc-vk9wm [68.026119ms] Mar 19 22:11:42.653: INFO: Created: latency-svc-794fq Mar 19 22:11:42.673: INFO: Got endpoints: latency-svc-794fq [56.687896ms] Mar 19 22:11:42.691: INFO: Created: latency-svc-mjggq Mar 19 22:11:42.707: INFO: Got endpoints: latency-svc-mjggq [91.482711ms] Mar 19 22:11:42.747: INFO: Created: latency-svc-wlxfk Mar 19 22:11:42.772: INFO: Got endpoints: latency-svc-wlxfk [155.808194ms] Mar 19 22:11:42.796: INFO: Created: latency-svc-n4662 Mar 19 22:11:42.811: INFO: Got endpoints: latency-svc-n4662 [194.703163ms] Mar 19 22:11:42.886: INFO: Created: latency-svc-jq5p9 Mar 19 22:11:42.911: INFO: Created: latency-svc-qbpsm Mar 19 22:11:42.912: INFO: Got endpoints: latency-svc-jq5p9 [295.180595ms] Mar 19 22:11:42.924: INFO: Got endpoints: latency-svc-qbpsm [307.674752ms] Mar 19 22:11:42.942: INFO: Created: latency-svc-tgd2v Mar 19 22:11:42.960: INFO: Got endpoints: latency-svc-tgd2v [344.197038ms] Mar 19 22:11:42.979: INFO: Created: latency-svc-vfcn2 Mar 19 22:11:43.034: INFO: Got endpoints: latency-svc-vfcn2 [418.231743ms] Mar 19 22:11:43.038: INFO: Created: latency-svc-xrnxt Mar 19 22:11:43.061: INFO: Got endpoints: latency-svc-xrnxt [444.93246ms] Mar 19 22:11:43.091: INFO: Created: latency-svc-ltfmn Mar 19 22:11:43.105: INFO: Got endpoints: latency-svc-ltfmn [488.686181ms] Mar 19 22:11:43.127: INFO: Created: latency-svc-hd47m Mar 19 22:11:43.192: INFO: Got endpoints: latency-svc-hd47m [575.857313ms] Mar 19 22:11:43.195: INFO: Created: 
latency-svc-cr7m5 Mar 19 22:11:43.201: INFO: Got endpoints: latency-svc-cr7m5 [584.981595ms] Mar 19 22:11:43.224: INFO: Created: latency-svc-h79hb Mar 19 22:11:43.238: INFO: Got endpoints: latency-svc-h79hb [621.557076ms] Mar 19 22:11:43.261: INFO: Created: latency-svc-df66c Mar 19 22:11:43.274: INFO: Got endpoints: latency-svc-df66c [657.546537ms] Mar 19 22:11:43.328: INFO: Created: latency-svc-d7js9 Mar 19 22:11:43.331: INFO: Got endpoints: latency-svc-d7js9 [714.750857ms] Mar 19 22:11:43.354: INFO: Created: latency-svc-npmpx Mar 19 22:11:43.416: INFO: Got endpoints: latency-svc-npmpx [743.605437ms] Mar 19 22:11:43.478: INFO: Created: latency-svc-4fkfp Mar 19 22:11:43.481: INFO: Got endpoints: latency-svc-4fkfp [773.042614ms] Mar 19 22:11:43.504: INFO: Created: latency-svc-xsl5k Mar 19 22:11:43.534: INFO: Got endpoints: latency-svc-xsl5k [762.337366ms] Mar 19 22:11:43.564: INFO: Created: latency-svc-bjk7p Mar 19 22:11:43.574: INFO: Got endpoints: latency-svc-bjk7p [763.492674ms] Mar 19 22:11:43.626: INFO: Created: latency-svc-mr8gk Mar 19 22:11:43.627: INFO: Got endpoints: latency-svc-mr8gk [715.415539ms] Mar 19 22:11:43.657: INFO: Created: latency-svc-dk6h6 Mar 19 22:11:43.665: INFO: Got endpoints: latency-svc-dk6h6 [741.520338ms] Mar 19 22:11:43.696: INFO: Created: latency-svc-b7fzl Mar 19 22:11:43.714: INFO: Got endpoints: latency-svc-b7fzl [753.391146ms] Mar 19 22:11:43.753: INFO: Created: latency-svc-pxc27 Mar 19 22:11:43.762: INFO: Got endpoints: latency-svc-pxc27 [727.596023ms] Mar 19 22:11:43.795: INFO: Created: latency-svc-n5rbt Mar 19 22:11:43.804: INFO: Got endpoints: latency-svc-n5rbt [743.292416ms] Mar 19 22:11:43.825: INFO: Created: latency-svc-25sbj Mar 19 22:11:43.834: INFO: Got endpoints: latency-svc-25sbj [729.363939ms] Mar 19 22:11:43.914: INFO: Created: latency-svc-ksjsk Mar 19 22:11:43.967: INFO: Got endpoints: latency-svc-ksjsk [775.094571ms] Mar 19 22:11:44.002: INFO: Created: latency-svc-pcl25 Mar 19 22:11:44.047: INFO: Got endpoints: latency-svc-pcl25 [845.565062ms] Mar 19 22:11:44.050: INFO: Created: latency-svc-snf84 Mar 19 22:11:44.063: INFO: Got endpoints: latency-svc-snf84 [825.052501ms] Mar 19 22:11:44.083: INFO: Created: latency-svc-ztp6v Mar 19 22:11:44.093: INFO: Got endpoints: latency-svc-ztp6v [819.53009ms] Mar 19 22:11:44.119: INFO: Created: latency-svc-whfdk Mar 19 22:11:44.130: INFO: Got endpoints: latency-svc-whfdk [798.979165ms] Mar 19 22:11:44.190: INFO: Created: latency-svc-24b6b Mar 19 22:11:44.193: INFO: Got endpoints: latency-svc-24b6b [776.710813ms] Mar 19 22:11:44.218: INFO: Created: latency-svc-gbcsf Mar 19 22:11:44.232: INFO: Got endpoints: latency-svc-gbcsf [751.613151ms] Mar 19 22:11:44.255: INFO: Created: latency-svc-lwdhs Mar 19 22:11:44.269: INFO: Got endpoints: latency-svc-lwdhs [734.649265ms] Mar 19 22:11:44.287: INFO: Created: latency-svc-9hngp Mar 19 22:11:44.346: INFO: Got endpoints: latency-svc-9hngp [771.257184ms] Mar 19 22:11:44.359: INFO: Created: latency-svc-52vcl Mar 19 22:11:44.372: INFO: Got endpoints: latency-svc-52vcl [744.411346ms] Mar 19 22:11:44.392: INFO: Created: latency-svc-9hmgw Mar 19 22:11:44.408: INFO: Got endpoints: latency-svc-9hmgw [742.901634ms] Mar 19 22:11:44.434: INFO: Created: latency-svc-gp8cv Mar 19 22:11:44.507: INFO: Got endpoints: latency-svc-gp8cv [793.608143ms] Mar 19 22:11:44.538: INFO: Created: latency-svc-qzf82 Mar 19 22:11:44.553: INFO: Got endpoints: latency-svc-qzf82 [791.051801ms] Mar 19 22:11:44.581: INFO: Created: latency-svc-nt59x Mar 19 22:11:44.595: INFO: Got endpoints: 
latency-svc-nt59x [790.043601ms] Mar 19 22:11:44.651: INFO: Created: latency-svc-x67j6 Mar 19 22:11:44.668: INFO: Got endpoints: latency-svc-x67j6 [833.75438ms] Mar 19 22:11:44.710: INFO: Created: latency-svc-gd9bd Mar 19 22:11:44.727: INFO: Got endpoints: latency-svc-gd9bd [759.782989ms] Mar 19 22:11:44.748: INFO: Created: latency-svc-tlbz7 Mar 19 22:11:44.801: INFO: Got endpoints: latency-svc-tlbz7 [753.874549ms] Mar 19 22:11:44.803: INFO: Created: latency-svc-gd8mt Mar 19 22:11:44.806: INFO: Got endpoints: latency-svc-gd8mt [743.482174ms] Mar 19 22:11:44.854: INFO: Created: latency-svc-gmblw Mar 19 22:11:44.871: INFO: Got endpoints: latency-svc-gmblw [777.810822ms] Mar 19 22:11:44.897: INFO: Created: latency-svc-5m67m Mar 19 22:11:44.956: INFO: Got endpoints: latency-svc-5m67m [826.66992ms] Mar 19 22:11:44.959: INFO: Created: latency-svc-l2npz Mar 19 22:11:44.968: INFO: Got endpoints: latency-svc-l2npz [774.736652ms] Mar 19 22:11:44.995: INFO: Created: latency-svc-q7w94 Mar 19 22:11:45.004: INFO: Got endpoints: latency-svc-q7w94 [772.094078ms] Mar 19 22:11:45.034: INFO: Created: latency-svc-5rhzs Mar 19 22:11:45.100: INFO: Got endpoints: latency-svc-5rhzs [830.720954ms] Mar 19 22:11:45.106: INFO: Created: latency-svc-9btmk Mar 19 22:11:45.119: INFO: Got endpoints: latency-svc-9btmk [772.69977ms] Mar 19 22:11:45.169: INFO: Created: latency-svc-vnzjm Mar 19 22:11:45.179: INFO: Got endpoints: latency-svc-vnzjm [807.196708ms] Mar 19 22:11:45.238: INFO: Created: latency-svc-7dt96 Mar 19 22:11:45.245: INFO: Got endpoints: latency-svc-7dt96 [837.145132ms] Mar 19 22:11:45.268: INFO: Created: latency-svc-f8f9m Mar 19 22:11:45.282: INFO: Got endpoints: latency-svc-f8f9m [774.624943ms] Mar 19 22:11:45.310: INFO: Created: latency-svc-z6qh5 Mar 19 22:11:45.324: INFO: Got endpoints: latency-svc-z6qh5 [770.865284ms] Mar 19 22:11:45.422: INFO: Created: latency-svc-cshw8 Mar 19 22:11:45.423: INFO: Got endpoints: latency-svc-cshw8 [828.367326ms] Mar 19 22:11:45.456: INFO: Created: latency-svc-rvg46 Mar 19 22:11:45.469: INFO: Got endpoints: latency-svc-rvg46 [800.473646ms] Mar 19 22:11:45.496: INFO: Created: latency-svc-x7t6z Mar 19 22:11:45.512: INFO: Got endpoints: latency-svc-x7t6z [784.689431ms] Mar 19 22:11:45.550: INFO: Created: latency-svc-fbhg7 Mar 19 22:11:45.565: INFO: Got endpoints: latency-svc-fbhg7 [764.305818ms] Mar 19 22:11:45.595: INFO: Created: latency-svc-xm4rj Mar 19 22:11:45.624: INFO: Got endpoints: latency-svc-xm4rj [818.073086ms] Mar 19 22:11:45.681: INFO: Created: latency-svc-9lmgv Mar 19 22:11:45.741: INFO: Got endpoints: latency-svc-9lmgv [869.909289ms] Mar 19 22:11:45.743: INFO: Created: latency-svc-7rg9w Mar 19 22:11:45.789: INFO: Got endpoints: latency-svc-7rg9w [832.080574ms] Mar 19 22:11:45.833: INFO: Created: latency-svc-mfjv7 Mar 19 22:11:45.836: INFO: Got endpoints: latency-svc-mfjv7 [868.296855ms] Mar 19 22:11:45.883: INFO: Created: latency-svc-wfncg Mar 19 22:11:45.896: INFO: Got endpoints: latency-svc-wfncg [891.629866ms] Mar 19 22:11:45.929: INFO: Created: latency-svc-2wn4z Mar 19 22:11:45.986: INFO: Got endpoints: latency-svc-2wn4z [886.261896ms] Mar 19 22:11:45.997: INFO: Created: latency-svc-mnvcf Mar 19 22:11:46.021: INFO: Got endpoints: latency-svc-mnvcf [901.96282ms] Mar 19 22:11:46.063: INFO: Created: latency-svc-27fjz Mar 19 22:11:46.078: INFO: Got endpoints: latency-svc-27fjz [898.748521ms] Mar 19 22:11:46.142: INFO: Created: latency-svc-6m7p2 Mar 19 22:11:46.146: INFO: Got endpoints: latency-svc-6m7p2 [901.016198ms] Mar 19 22:11:46.175: INFO: Created: 
latency-svc-tkvq2 Mar 19 22:11:46.192: INFO: Got endpoints: latency-svc-tkvq2 [909.589733ms] Mar 19 22:11:46.213: INFO: Created: latency-svc-s54g8 Mar 19 22:11:46.228: INFO: Got endpoints: latency-svc-s54g8 [903.647476ms] Mar 19 22:11:46.286: INFO: Created: latency-svc-j8rcz Mar 19 22:11:46.296: INFO: Got endpoints: latency-svc-j8rcz [873.396596ms] Mar 19 22:11:46.330: INFO: Created: latency-svc-px48h Mar 19 22:11:46.355: INFO: Got endpoints: latency-svc-px48h [885.851145ms] Mar 19 22:11:46.423: INFO: Created: latency-svc-pvc49 Mar 19 22:11:46.426: INFO: Got endpoints: latency-svc-pvc49 [914.551121ms] Mar 19 22:11:46.458: INFO: Created: latency-svc-nf594 Mar 19 22:11:46.470: INFO: Got endpoints: latency-svc-nf594 [904.683802ms] Mar 19 22:11:46.492: INFO: Created: latency-svc-5l6nj Mar 19 22:11:46.505: INFO: Got endpoints: latency-svc-5l6nj [880.689614ms] Mar 19 22:11:46.573: INFO: Created: latency-svc-vtcxr Mar 19 22:11:46.576: INFO: Got endpoints: latency-svc-vtcxr [835.326285ms] Mar 19 22:11:46.600: INFO: Created: latency-svc-p2hjr Mar 19 22:11:46.614: INFO: Got endpoints: latency-svc-p2hjr [825.126815ms] Mar 19 22:11:46.644: INFO: Created: latency-svc-7m45s Mar 19 22:11:46.668: INFO: Got endpoints: latency-svc-7m45s [831.47107ms] Mar 19 22:11:46.725: INFO: Created: latency-svc-qdk7k Mar 19 22:11:46.728: INFO: Got endpoints: latency-svc-qdk7k [831.464603ms] Mar 19 22:11:46.756: INFO: Created: latency-svc-b5b9n Mar 19 22:11:46.770: INFO: Got endpoints: latency-svc-b5b9n [784.214369ms] Mar 19 22:11:46.792: INFO: Created: latency-svc-fx4dl Mar 19 22:11:46.807: INFO: Got endpoints: latency-svc-fx4dl [786.048428ms] Mar 19 22:11:46.879: INFO: Created: latency-svc-7bl4v Mar 19 22:11:46.881: INFO: Got endpoints: latency-svc-7bl4v [802.942ms] Mar 19 22:11:46.908: INFO: Created: latency-svc-lksdf Mar 19 22:11:46.921: INFO: Got endpoints: latency-svc-lksdf [774.37487ms] Mar 19 22:11:46.972: INFO: Created: latency-svc-pst8m Mar 19 22:11:47.004: INFO: Got endpoints: latency-svc-pst8m [812.67433ms] Mar 19 22:11:47.018: INFO: Created: latency-svc-z5nrh Mar 19 22:11:47.029: INFO: Got endpoints: latency-svc-z5nrh [801.485ms] Mar 19 22:11:47.052: INFO: Created: latency-svc-wfq5p Mar 19 22:11:47.066: INFO: Got endpoints: latency-svc-wfq5p [769.468456ms] Mar 19 22:11:47.088: INFO: Created: latency-svc-wjzmv Mar 19 22:11:47.102: INFO: Got endpoints: latency-svc-wjzmv [747.724745ms] Mar 19 22:11:47.155: INFO: Created: latency-svc-dv5mh Mar 19 22:11:47.158: INFO: Got endpoints: latency-svc-dv5mh [731.074604ms] Mar 19 22:11:47.188: INFO: Created: latency-svc-wqpmx Mar 19 22:11:47.205: INFO: Got endpoints: latency-svc-wqpmx [735.242413ms] Mar 19 22:11:47.224: INFO: Created: latency-svc-7l627 Mar 19 22:11:47.241: INFO: Got endpoints: latency-svc-7l627 [736.038774ms] Mar 19 22:11:47.298: INFO: Created: latency-svc-bhzzg Mar 19 22:11:47.302: INFO: Got endpoints: latency-svc-bhzzg [725.191501ms] Mar 19 22:11:47.340: INFO: Created: latency-svc-vgvpl Mar 19 22:11:47.350: INFO: Got endpoints: latency-svc-vgvpl [735.565107ms] Mar 19 22:11:47.370: INFO: Created: latency-svc-9b98l Mar 19 22:11:47.392: INFO: Got endpoints: latency-svc-9b98l [724.321124ms] Mar 19 22:11:47.462: INFO: Created: latency-svc-g2kqj Mar 19 22:11:47.470: INFO: Got endpoints: latency-svc-g2kqj [742.209838ms] Mar 19 22:11:47.494: INFO: Created: latency-svc-wt2tm Mar 19 22:11:47.507: INFO: Got endpoints: latency-svc-wt2tm [736.481976ms] Mar 19 22:11:47.527: INFO: Created: latency-svc-2zh67 Mar 19 22:11:47.537: INFO: Got endpoints: latency-svc-2zh67 
[730.118204ms] Mar 19 22:11:47.598: INFO: Created: latency-svc-jwgsm Mar 19 22:11:47.601: INFO: Got endpoints: latency-svc-jwgsm [720.148928ms] Mar 19 22:11:47.638: INFO: Created: latency-svc-mwld6 Mar 19 22:11:47.651: INFO: Got endpoints: latency-svc-mwld6 [730.241302ms] Mar 19 22:11:47.675: INFO: Created: latency-svc-vljhl Mar 19 22:11:47.753: INFO: Got endpoints: latency-svc-vljhl [748.59387ms] Mar 19 22:11:47.755: INFO: Created: latency-svc-qfvsk Mar 19 22:11:47.759: INFO: Got endpoints: latency-svc-qfvsk [729.884938ms] Mar 19 22:11:47.808: INFO: Created: latency-svc-46j9z Mar 19 22:11:47.820: INFO: Got endpoints: latency-svc-46j9z [753.820327ms] Mar 19 22:11:47.842: INFO: Created: latency-svc-s7nxs Mar 19 22:11:47.879: INFO: Got endpoints: latency-svc-s7nxs [776.391106ms] Mar 19 22:11:47.890: INFO: Created: latency-svc-7fqjf Mar 19 22:11:47.904: INFO: Got endpoints: latency-svc-7fqjf [746.895329ms] Mar 19 22:11:47.934: INFO: Created: latency-svc-8kg85 Mar 19 22:11:47.947: INFO: Got endpoints: latency-svc-8kg85 [741.513404ms] Mar 19 22:11:47.976: INFO: Created: latency-svc-bs2sh Mar 19 22:11:48.028: INFO: Got endpoints: latency-svc-bs2sh [786.670605ms] Mar 19 22:11:48.046: INFO: Created: latency-svc-92ls8 Mar 19 22:11:48.063: INFO: Got endpoints: latency-svc-92ls8 [760.950032ms] Mar 19 22:11:48.112: INFO: Created: latency-svc-hp7ww Mar 19 22:11:48.129: INFO: Got endpoints: latency-svc-hp7ww [779.177326ms] Mar 19 22:11:48.180: INFO: Created: latency-svc-x9rn2 Mar 19 22:11:48.192: INFO: Got endpoints: latency-svc-x9rn2 [799.949835ms] Mar 19 22:11:48.226: INFO: Created: latency-svc-26x45 Mar 19 22:11:48.242: INFO: Got endpoints: latency-svc-26x45 [772.260282ms] Mar 19 22:11:48.268: INFO: Created: latency-svc-8zndg Mar 19 22:11:48.315: INFO: Got endpoints: latency-svc-8zndg [808.46437ms] Mar 19 22:11:48.317: INFO: Created: latency-svc-wx7r4 Mar 19 22:11:48.333: INFO: Got endpoints: latency-svc-wx7r4 [796.027333ms] Mar 19 22:11:48.354: INFO: Created: latency-svc-chhdx Mar 19 22:11:48.369: INFO: Got endpoints: latency-svc-chhdx [768.029241ms] Mar 19 22:11:48.397: INFO: Created: latency-svc-nwvxm Mar 19 22:11:48.405: INFO: Got endpoints: latency-svc-nwvxm [754.391857ms] Mar 19 22:11:48.459: INFO: Created: latency-svc-jt2qg Mar 19 22:11:48.490: INFO: Got endpoints: latency-svc-jt2qg [736.971783ms] Mar 19 22:11:48.491: INFO: Created: latency-svc-lncn8 Mar 19 22:11:48.515: INFO: Got endpoints: latency-svc-lncn8 [755.176038ms] Mar 19 22:11:48.622: INFO: Created: latency-svc-r89qw Mar 19 22:11:48.625: INFO: Got endpoints: latency-svc-r89qw [805.118379ms] Mar 19 22:11:48.646: INFO: Created: latency-svc-56tk5 Mar 19 22:11:48.670: INFO: Got endpoints: latency-svc-56tk5 [791.155013ms] Mar 19 22:11:48.707: INFO: Created: latency-svc-vzbk9 Mar 19 22:11:48.719: INFO: Got endpoints: latency-svc-vzbk9 [814.120833ms] Mar 19 22:11:48.783: INFO: Created: latency-svc-95dbl Mar 19 22:11:48.785: INFO: Got endpoints: latency-svc-95dbl [838.385636ms] Mar 19 22:11:48.811: INFO: Created: latency-svc-dhgmk Mar 19 22:11:48.832: INFO: Got endpoints: latency-svc-dhgmk [803.488702ms] Mar 19 22:11:48.856: INFO: Created: latency-svc-v42cd Mar 19 22:11:48.870: INFO: Got endpoints: latency-svc-v42cd [807.133143ms] Mar 19 22:11:48.930: INFO: Created: latency-svc-5gkh7 Mar 19 22:11:48.933: INFO: Got endpoints: latency-svc-5gkh7 [804.304872ms] Mar 19 22:11:48.972: INFO: Created: latency-svc-jqtxv Mar 19 22:11:48.984: INFO: Got endpoints: latency-svc-jqtxv [791.837682ms] Mar 19 22:11:49.003: INFO: Created: latency-svc-9l2p6 Mar 
19 22:11:49.014: INFO: Got endpoints: latency-svc-9l2p6 [772.332374ms] Mar 19 22:11:49.058: INFO: Created: latency-svc-vvl7v Mar 19 22:11:49.061: INFO: Got endpoints: latency-svc-vvl7v [745.97417ms] Mar 19 22:11:49.084: INFO: Created: latency-svc-dvlc8 Mar 19 22:11:49.099: INFO: Got endpoints: latency-svc-dvlc8 [765.715541ms] Mar 19 22:11:49.123: INFO: Created: latency-svc-b6s56 Mar 19 22:11:49.135: INFO: Got endpoints: latency-svc-b6s56 [766.233555ms] Mar 19 22:11:49.152: INFO: Created: latency-svc-kc8ds Mar 19 22:11:49.196: INFO: Got endpoints: latency-svc-kc8ds [790.660644ms] Mar 19 22:11:49.206: INFO: Created: latency-svc-9vx9s Mar 19 22:11:49.220: INFO: Got endpoints: latency-svc-9vx9s [729.664005ms] Mar 19 22:11:49.240: INFO: Created: latency-svc-5mnlx Mar 19 22:11:49.270: INFO: Got endpoints: latency-svc-5mnlx [755.349629ms] Mar 19 22:11:49.358: INFO: Created: latency-svc-k5zrq Mar 19 22:11:49.361: INFO: Got endpoints: latency-svc-k5zrq [736.043211ms] Mar 19 22:11:49.432: INFO: Created: latency-svc-sm5mm Mar 19 22:11:49.449: INFO: Got endpoints: latency-svc-sm5mm [778.732006ms] Mar 19 22:11:49.483: INFO: Created: latency-svc-z2dkv Mar 19 22:11:49.487: INFO: Got endpoints: latency-svc-z2dkv [767.907651ms] Mar 19 22:11:49.536: INFO: Created: latency-svc-6mx8k Mar 19 22:11:49.553: INFO: Got endpoints: latency-svc-6mx8k [768.016147ms] Mar 19 22:11:49.578: INFO: Created: latency-svc-79dn8 Mar 19 22:11:49.633: INFO: Got endpoints: latency-svc-79dn8 [801.283031ms] Mar 19 22:11:49.640: INFO: Created: latency-svc-sphjg Mar 19 22:11:49.647: INFO: Got endpoints: latency-svc-sphjg [777.465187ms] Mar 19 22:11:49.666: INFO: Created: latency-svc-h7gld Mar 19 22:11:49.678: INFO: Got endpoints: latency-svc-h7gld [744.929815ms] Mar 19 22:11:49.696: INFO: Created: latency-svc-h28g8 Mar 19 22:11:49.708: INFO: Got endpoints: latency-svc-h28g8 [724.252125ms] Mar 19 22:11:49.772: INFO: Created: latency-svc-wlk2b Mar 19 22:11:49.774: INFO: Got endpoints: latency-svc-wlk2b [759.381419ms] Mar 19 22:11:49.801: INFO: Created: latency-svc-n6cc4 Mar 19 22:11:49.810: INFO: Got endpoints: latency-svc-n6cc4 [749.02658ms] Mar 19 22:11:49.828: INFO: Created: latency-svc-7kx7x Mar 19 22:11:49.853: INFO: Got endpoints: latency-svc-7kx7x [754.572198ms] Mar 19 22:11:49.870: INFO: Created: latency-svc-6jr7w Mar 19 22:11:49.920: INFO: Got endpoints: latency-svc-6jr7w [785.063613ms] Mar 19 22:11:49.950: INFO: Created: latency-svc-ktd6j Mar 19 22:11:49.972: INFO: Got endpoints: latency-svc-ktd6j [776.122168ms] Mar 19 22:11:50.017: INFO: Created: latency-svc-5qt6n Mar 19 22:11:50.076: INFO: Got endpoints: latency-svc-5qt6n [856.048105ms] Mar 19 22:11:50.092: INFO: Created: latency-svc-l74rx Mar 19 22:11:50.106: INFO: Got endpoints: latency-svc-l74rx [836.072104ms] Mar 19 22:11:50.128: INFO: Created: latency-svc-sbt6x Mar 19 22:11:50.142: INFO: Got endpoints: latency-svc-sbt6x [780.678741ms] Mar 19 22:11:50.172: INFO: Created: latency-svc-n6jqf Mar 19 22:11:50.220: INFO: Got endpoints: latency-svc-n6jqf [770.93118ms] Mar 19 22:11:50.233: INFO: Created: latency-svc-l62kp Mar 19 22:11:50.244: INFO: Got endpoints: latency-svc-l62kp [757.471586ms] Mar 19 22:11:50.272: INFO: Created: latency-svc-l6t2h Mar 19 22:11:50.286: INFO: Got endpoints: latency-svc-l6t2h [732.800393ms] Mar 19 22:11:50.308: INFO: Created: latency-svc-x2pxx Mar 19 22:11:50.387: INFO: Got endpoints: latency-svc-x2pxx [754.326747ms] Mar 19 22:11:50.390: INFO: Created: latency-svc-4p2qg Mar 19 22:11:50.416: INFO: Got endpoints: latency-svc-4p2qg [768.184705ms] Mar 
19 22:11:50.430: INFO: Created: latency-svc-pwtqt Mar 19 22:11:50.447: INFO: Got endpoints: latency-svc-pwtqt [768.886821ms] Mar 19 22:11:50.464: INFO: Created: latency-svc-p8sgj Mar 19 22:11:50.485: INFO: Got endpoints: latency-svc-p8sgj [776.90048ms] Mar 19 22:11:50.555: INFO: Created: latency-svc-pvq2b Mar 19 22:11:50.559: INFO: Got endpoints: latency-svc-pvq2b [784.535117ms] Mar 19 22:11:50.586: INFO: Created: latency-svc-k99h2 Mar 19 22:11:50.600: INFO: Got endpoints: latency-svc-k99h2 [789.291083ms] Mar 19 22:11:50.622: INFO: Created: latency-svc-nscl4 Mar 19 22:11:50.636: INFO: Got endpoints: latency-svc-nscl4 [782.597405ms] Mar 19 22:11:50.742: INFO: Created: latency-svc-sqkvs Mar 19 22:11:50.744: INFO: Got endpoints: latency-svc-sqkvs [824.070059ms] Mar 19 22:11:50.770: INFO: Created: latency-svc-bpblq Mar 19 22:11:50.780: INFO: Got endpoints: latency-svc-bpblq [807.674442ms] Mar 19 22:11:50.800: INFO: Created: latency-svc-9v25s Mar 19 22:11:50.872: INFO: Got endpoints: latency-svc-9v25s [796.415701ms] Mar 19 22:11:50.886: INFO: Created: latency-svc-stg6k Mar 19 22:11:50.901: INFO: Got endpoints: latency-svc-stg6k [794.846902ms] Mar 19 22:11:50.922: INFO: Created: latency-svc-d5q7b Mar 19 22:11:50.953: INFO: Got endpoints: latency-svc-d5q7b [810.782033ms] Mar 19 22:11:51.005: INFO: Created: latency-svc-44q9m Mar 19 22:11:51.021: INFO: Got endpoints: latency-svc-44q9m [801.613877ms] Mar 19 22:11:51.046: INFO: Created: latency-svc-9lnp8 Mar 19 22:11:51.058: INFO: Got endpoints: latency-svc-9lnp8 [814.200211ms] Mar 19 22:11:51.078: INFO: Created: latency-svc-x2wdj Mar 19 22:11:51.095: INFO: Got endpoints: latency-svc-x2wdj [808.402149ms] Mar 19 22:11:51.142: INFO: Created: latency-svc-ggqgc Mar 19 22:11:51.145: INFO: Got endpoints: latency-svc-ggqgc [757.18383ms] Mar 19 22:11:51.196: INFO: Created: latency-svc-f6rgt Mar 19 22:11:51.212: INFO: Got endpoints: latency-svc-f6rgt [796.566882ms] Mar 19 22:11:51.231: INFO: Created: latency-svc-vjhs9 Mar 19 22:11:51.292: INFO: Got endpoints: latency-svc-vjhs9 [844.636667ms] Mar 19 22:11:51.294: INFO: Created: latency-svc-86qn9 Mar 19 22:11:51.299: INFO: Got endpoints: latency-svc-86qn9 [813.800628ms] Mar 19 22:11:51.331: INFO: Created: latency-svc-g4djg Mar 19 22:11:51.366: INFO: Got endpoints: latency-svc-g4djg [807.347833ms] Mar 19 22:11:51.388: INFO: Created: latency-svc-k2qdr Mar 19 22:11:51.466: INFO: Got endpoints: latency-svc-k2qdr [865.732495ms] Mar 19 22:11:51.467: INFO: Created: latency-svc-trflb Mar 19 22:11:51.474: INFO: Got endpoints: latency-svc-trflb [838.026816ms] Mar 19 22:11:51.492: INFO: Created: latency-svc-tf5h7 Mar 19 22:11:51.504: INFO: Got endpoints: latency-svc-tf5h7 [759.859226ms] Mar 19 22:11:51.522: INFO: Created: latency-svc-p9xtq Mar 19 22:11:51.546: INFO: Got endpoints: latency-svc-p9xtq [766.184932ms] Mar 19 22:11:51.603: INFO: Created: latency-svc-xnrnc Mar 19 22:11:51.613: INFO: Got endpoints: latency-svc-xnrnc [740.805654ms] Mar 19 22:11:51.635: INFO: Created: latency-svc-8lw6q Mar 19 22:11:51.643: INFO: Got endpoints: latency-svc-8lw6q [742.294838ms] Mar 19 22:11:51.679: INFO: Created: latency-svc-gqgd5 Mar 19 22:11:51.692: INFO: Got endpoints: latency-svc-gqgd5 [738.928649ms] Mar 19 22:11:51.747: INFO: Created: latency-svc-72798 Mar 19 22:11:51.752: INFO: Got endpoints: latency-svc-72798 [730.389389ms] Mar 19 22:11:51.777: INFO: Created: latency-svc-4g2wb Mar 19 22:11:51.794: INFO: Got endpoints: latency-svc-4g2wb [735.714488ms] Mar 19 22:11:51.814: INFO: Created: latency-svc-bz4wn Mar 19 22:11:51.872: 
INFO: Got endpoints: latency-svc-bz4wn [777.751293ms] Mar 19 22:11:51.880: INFO: Created: latency-svc-vqnlt Mar 19 22:11:51.897: INFO: Got endpoints: latency-svc-vqnlt [752.174685ms] Mar 19 22:11:51.918: INFO: Created: latency-svc-mw2kt Mar 19 22:11:51.933: INFO: Got endpoints: latency-svc-mw2kt [721.10481ms] Mar 19 22:11:51.972: INFO: Created: latency-svc-hmzxp Mar 19 22:11:52.052: INFO: Got endpoints: latency-svc-hmzxp [760.267556ms] Mar 19 22:11:52.054: INFO: Created: latency-svc-rn2tr Mar 19 22:11:52.059: INFO: Got endpoints: latency-svc-rn2tr [760.023277ms] Mar 19 22:11:52.084: INFO: Created: latency-svc-5r9nb Mar 19 22:11:52.110: INFO: Got endpoints: latency-svc-5r9nb [744.069914ms] Mar 19 22:11:52.146: INFO: Created: latency-svc-8wlpw Mar 19 22:11:52.226: INFO: Got endpoints: latency-svc-8wlpw [760.375079ms] Mar 19 22:11:52.228: INFO: Created: latency-svc-ggh7v Mar 19 22:11:52.234: INFO: Got endpoints: latency-svc-ggh7v [760.136426ms] Mar 19 22:11:52.251: INFO: Created: latency-svc-25qgg Mar 19 22:11:52.265: INFO: Got endpoints: latency-svc-25qgg [760.705126ms] Mar 19 22:11:52.288: INFO: Created: latency-svc-qwkbr Mar 19 22:11:52.301: INFO: Got endpoints: latency-svc-qwkbr [754.432171ms] Mar 19 22:11:52.376: INFO: Created: latency-svc-96qfn Mar 19 22:11:52.379: INFO: Got endpoints: latency-svc-96qfn [766.110627ms] Mar 19 22:11:52.398: INFO: Created: latency-svc-5mb8p Mar 19 22:11:52.417: INFO: Got endpoints: latency-svc-5mb8p [773.902222ms] Mar 19 22:11:52.438: INFO: Created: latency-svc-r6gnz Mar 19 22:11:52.452: INFO: Got endpoints: latency-svc-r6gnz [759.959283ms] Mar 19 22:11:52.474: INFO: Created: latency-svc-vrk88 Mar 19 22:11:52.531: INFO: Got endpoints: latency-svc-vrk88 [778.947098ms] Mar 19 22:11:52.554: INFO: Created: latency-svc-sw77l Mar 19 22:11:52.590: INFO: Got endpoints: latency-svc-sw77l [796.23778ms] Mar 19 22:11:52.614: INFO: Created: latency-svc-s4zkr Mar 19 22:11:52.626: INFO: Got endpoints: latency-svc-s4zkr [753.705255ms] Mar 19 22:11:52.681: INFO: Created: latency-svc-csfxd Mar 19 22:11:52.686: INFO: Got endpoints: latency-svc-csfxd [789.464735ms] Mar 19 22:11:52.708: INFO: Created: latency-svc-w7vwf Mar 19 22:11:52.717: INFO: Got endpoints: latency-svc-w7vwf [783.783644ms] Mar 19 22:11:52.740: INFO: Created: latency-svc-hg6vn Mar 19 22:11:52.754: INFO: Got endpoints: latency-svc-hg6vn [701.429301ms] Mar 19 22:11:52.776: INFO: Created: latency-svc-smmrw Mar 19 22:11:52.824: INFO: Got endpoints: latency-svc-smmrw [765.271641ms] Mar 19 22:11:52.826: INFO: Created: latency-svc-th8qd Mar 19 22:11:52.832: INFO: Got endpoints: latency-svc-th8qd [722.131458ms] Mar 19 22:11:52.858: INFO: Created: latency-svc-lbdlc Mar 19 22:11:52.868: INFO: Got endpoints: latency-svc-lbdlc [641.988592ms] Mar 19 22:11:52.888: INFO: Created: latency-svc-q4m5w Mar 19 22:11:52.900: INFO: Got endpoints: latency-svc-q4m5w [665.695543ms] Mar 19 22:11:52.920: INFO: Created: latency-svc-8np9n Mar 19 22:11:52.962: INFO: Got endpoints: latency-svc-8np9n [697.227404ms] Mar 19 22:11:52.962: INFO: Latencies: [56.687896ms 91.482711ms 155.808194ms 194.703163ms 295.180595ms 307.674752ms 344.197038ms 418.231743ms 444.93246ms 488.686181ms 575.857313ms 584.981595ms 621.557076ms 641.988592ms 657.546537ms 665.695543ms 697.227404ms 701.429301ms 714.750857ms 715.415539ms 720.148928ms 721.10481ms 722.131458ms 724.252125ms 724.321124ms 725.191501ms 727.596023ms 729.363939ms 729.664005ms 729.884938ms 730.118204ms 730.241302ms 730.389389ms 731.074604ms 732.800393ms 734.649265ms 735.242413ms 735.565107ms 
735.714488ms 736.038774ms 736.043211ms 736.481976ms 736.971783ms 738.928649ms 740.805654ms 741.513404ms 741.520338ms 742.209838ms 742.294838ms 742.901634ms 743.292416ms 743.482174ms 743.605437ms 744.069914ms 744.411346ms 744.929815ms 745.97417ms 746.895329ms 747.724745ms 748.59387ms 749.02658ms 751.613151ms 752.174685ms 753.391146ms 753.705255ms 753.820327ms 753.874549ms 754.326747ms 754.391857ms 754.432171ms 754.572198ms 755.176038ms 755.349629ms 757.18383ms 757.471586ms 759.381419ms 759.782989ms 759.859226ms 759.959283ms 760.023277ms 760.136426ms 760.267556ms 760.375079ms 760.705126ms 760.950032ms 762.337366ms 763.492674ms 764.305818ms 765.271641ms 765.715541ms 766.110627ms 766.184932ms 766.233555ms 767.907651ms 768.016147ms 768.029241ms 768.184705ms 768.886821ms 769.468456ms 770.865284ms 770.93118ms 771.257184ms 772.094078ms 772.260282ms 772.332374ms 772.69977ms 773.042614ms 773.902222ms 774.37487ms 774.624943ms 774.736652ms 775.094571ms 776.122168ms 776.391106ms 776.710813ms 776.90048ms 777.465187ms 777.751293ms 777.810822ms 778.732006ms 778.947098ms 779.177326ms 780.678741ms 782.597405ms 783.783644ms 784.214369ms 784.535117ms 784.689431ms 785.063613ms 786.048428ms 786.670605ms 789.291083ms 789.464735ms 790.043601ms 790.660644ms 791.051801ms 791.155013ms 791.837682ms 793.608143ms 794.846902ms 796.027333ms 796.23778ms 796.415701ms 796.566882ms 798.979165ms 799.949835ms 800.473646ms 801.283031ms 801.485ms 801.613877ms 802.942ms 803.488702ms 804.304872ms 805.118379ms 807.133143ms 807.196708ms 807.347833ms 807.674442ms 808.402149ms 808.46437ms 810.782033ms 812.67433ms 813.800628ms 814.120833ms 814.200211ms 818.073086ms 819.53009ms 824.070059ms 825.052501ms 825.126815ms 826.66992ms 828.367326ms 830.720954ms 831.464603ms 831.47107ms 832.080574ms 833.75438ms 835.326285ms 836.072104ms 837.145132ms 838.026816ms 838.385636ms 844.636667ms 845.565062ms 856.048105ms 865.732495ms 868.296855ms 869.909289ms 873.396596ms 880.689614ms 885.851145ms 886.261896ms 891.629866ms 898.748521ms 901.016198ms 901.96282ms 903.647476ms 904.683802ms 909.589733ms 914.551121ms] Mar 19 22:11:52.963: INFO: 50 %ile: 770.93118ms Mar 19 22:11:52.963: INFO: 90 %ile: 838.026816ms Mar 19 22:11:52.963: INFO: 99 %ile: 909.589733ms Mar 19 22:11:52.963: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:11:52.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1633" for this suite. 
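The latency test above creates 200 Services against a single replication-controller pod and records, for each, the time from Service creation until its Endpoints object is populated; the 50/90/99 percentiles are what the final INFO lines report. The real test observes endpoints through an informer; the sketch below approximates one sample with simple polling (names and intervals are illustrative):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// serviceEndpointLatency creates a Service selecting existing pods and
// measures how long until its Endpoints object has a ready address.
func serviceEndpointLatency(cs kubernetes.Interface, ns string, svc *corev1.Service) (time.Duration, error) {
	ctx := context.TODO()
	start := time.Now()
	created, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		return 0, err
	}
	err = wait.PollImmediate(10*time.Millisecond, 30*time.Second, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			return false, nil // Endpoints object not created yet; keep polling
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	return time.Since(start), err
}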
• [SLOW TEST:13.677 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":238,"skipped":3911,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:11:52.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-1ea69535-0436-4294-8e07-a488c1948ade STEP: Creating a pod to test consume secrets Mar 19 22:11:53.038: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325" in namespace "projected-8166" to be "success or failure" Mar 19 22:11:53.042: INFO: Pod "pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454ms Mar 19 22:11:55.046: INFO: Pod "pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007744462s Mar 19 22:11:57.050: INFO: Pod "pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01191155s STEP: Saw pod success Mar 19 22:11:57.050: INFO: Pod "pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325" satisfied condition "success or failure" Mar 19 22:11:57.053: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325 container projected-secret-volume-test: STEP: delete the pod Mar 19 22:11:57.068: INFO: Waiting for pod pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325 to disappear Mar 19 22:11:57.073: INFO: Pod pod-projected-secrets-32cee01a-6bc2-49b2-8423-da93c0dbd325 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:11:57.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8166" for this suite. 
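The "non-root with defaultMode and fsGroup" secret test that follows in the log pairs a pod-level security context with a projected secret volume, so the secret files come up group-readable for the non-root user. A partial PodSpec sketch of that combination (containers omitted for brevity; the UID, GID, and mode values are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// nonRootSecretPodSpec shows the security-context half of the test: run as
// a non-root UID with an fsGroup so projected secret files (defaultMode
// 0440 here) are readable by the pod's supplemental group.
func nonRootSecretPodSpec(secretName string) corev1.PodSpec {
	uid, gid := int64(1000), int64(1001)
	mode := int32(0440)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &uid,
			FSGroup:   &gid,
		},
		// Containers omitted for brevity; the test mounts the volume read-only.
		Volumes: []corev1.Volume{{
			Name: "projected-secret-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &mode,
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}},
	}
}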
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3917,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:11:57.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-12b18e50-f420-4d87-8102-b25532e59ddb STEP: Creating a pod to test consume configMaps Mar 19 22:11:57.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b" in namespace "configmap-687" to be "success or failure" Mar 19 22:11:57.170: INFO: Pod "pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.683306ms Mar 19 22:11:59.215: INFO: Pod "pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061673808s Mar 19 22:12:01.223: INFO: Pod "pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069646454s STEP: Saw pod success Mar 19 22:12:01.223: INFO: Pod "pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b" satisfied condition "success or failure" Mar 19 22:12:01.228: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b container configmap-volume-test: STEP: delete the pod Mar 19 22:12:01.258: INFO: Waiting for pod pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b to disappear Mar 19 22:12:01.316: INFO: Pod pod-configmaps-3321f391-ede7-4a4e-9bb3-50ed856bbc0b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:01.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-687" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3917,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:01.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 19 22:12:06.068: INFO: Successfully updated pod "adopt-release-pwh98" STEP: Checking that the Job readopts the Pod Mar 19 22:12:06.069: INFO: Waiting up to 15m0s for pod "adopt-release-pwh98" in namespace "job-4587" to be "adopted" Mar 19 22:12:06.093: INFO: Pod "adopt-release-pwh98": Phase="Running", Reason="", readiness=true. Elapsed: 24.474045ms Mar 19 22:12:08.128: INFO: Pod "adopt-release-pwh98": Phase="Running", Reason="", readiness=true. Elapsed: 2.059069601s Mar 19 22:12:08.128: INFO: Pod "adopt-release-pwh98" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 19 22:12:08.647: INFO: Successfully updated pod "adopt-release-pwh98" STEP: Checking that the Job releases the Pod Mar 19 22:12:08.647: INFO: Waiting up to 15m0s for pod "adopt-release-pwh98" in namespace "job-4587" to be "released" Mar 19 22:12:08.655: INFO: Pod "adopt-release-pwh98": Phase="Running", Reason="", readiness=true. Elapsed: 8.193817ms Mar 19 22:12:08.655: INFO: Pod "adopt-release-pwh98" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:08.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4587" for this suite. 
• [SLOW TEST:7.406 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":241,"skipped":3921,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:08.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-7c7f3a54-c863-42e6-b4c3-7f573cf47f71 STEP: Creating a pod to test consume secrets Mar 19 22:12:08.934: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b" in namespace "projected-4052" to be "success or failure" Mar 19 22:12:08.959: INFO: Pod "pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.493638ms Mar 19 22:12:10.969: INFO: Pod "pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034865772s Mar 19 22:12:12.992: INFO: Pod "pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b": Phase="Running", Reason="", readiness=true. Elapsed: 4.058597971s Mar 19 22:12:15.007: INFO: Pod "pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072872073s STEP: Saw pod success Mar 19 22:12:15.007: INFO: Pod "pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b" satisfied condition "success or failure" Mar 19 22:12:15.009: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b container projected-secret-volume-test: STEP: delete the pod Mar 19 22:12:15.119: INFO: Waiting for pod pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b to disappear Mar 19 22:12:15.135: INFO: Pod pod-projected-secrets-f00e6ca6-dcf0-4bbe-a400-9411f240e73b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:15.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4052" for this suite. 
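The recurring "Trying to get logs from node ... container ..." step is how these tests verify results: once the pod has succeeded, they read the container's stdout and compare it against the expected file content and mode. Fetching those logs with client-go, roughly (a sketch; the container name comes from the pod spec):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// containerLogs fetches a finished container's stdout, which the volume
// tests compare against the expected file contents.
func containerLogs(cs kubernetes.Interface, ns, pod, container string) (string, error) {
	data, err := cs.CoreV1().Pods(ns).
		GetLogs(pod, &corev1.PodLogOptions{Container: container}).
		DoRaw(context.TODO())
	return string(data), err
}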
• [SLOW TEST:6.379 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3923,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:15.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 19 22:12:15.326: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1833 /api/v1/namespaces/watch-1833/configmaps/e2e-watch-test-watch-closed a40790e9-0b95-4794-8bdb-f7cca797d16e 1132329 0 2020-03-19 22:12:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 19 22:12:15.327: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1833 /api/v1/namespaces/watch-1833/configmaps/e2e-watch-test-watch-closed a40790e9-0b95-4794-8bdb-f7cca797d16e 1132331 0 2020-03-19 22:12:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 19 22:12:15.366: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1833 /api/v1/namespaces/watch-1833/configmaps/e2e-watch-test-watch-closed a40790e9-0b95-4794-8bdb-f7cca797d16e 1132333 0 2020-03-19 22:12:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 19 22:12:15.366: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1833 /api/v1/namespaces/watch-1833/configmaps/e2e-watch-test-watch-closed a40790e9-0b95-4794-8bdb-f7cca797d16e 1132334 0 2020-03-19 22:12:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:15.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1833" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":243,"skipped":3925,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:15.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:12:15.652: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 19 22:12:15.684: INFO: Number of nodes with available pods: 0 Mar 19 22:12:15.684: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 19 22:12:15.821: INFO: Number of nodes with available pods: 0 Mar 19 22:12:15.821: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:16.910: INFO: Number of nodes with available pods: 0 Mar 19 22:12:16.910: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:17.857: INFO: Number of nodes with available pods: 0 Mar 19 22:12:17.857: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:18.841: INFO: Number of nodes with available pods: 1 Mar 19 22:12:18.841: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 19 22:12:18.914: INFO: Number of nodes with available pods: 1 Mar 19 22:12:18.914: INFO: Number of running nodes: 0, number of available pods: 1 Mar 19 22:12:19.957: INFO: Number of nodes with available pods: 0 Mar 19 22:12:19.957: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 19 22:12:20.024: INFO: Number of nodes with available pods: 0 Mar 19 22:12:20.024: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:21.027: INFO: Number of nodes with available pods: 0 Mar 19 22:12:21.027: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:22.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:22.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:23.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:23.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:24.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:24.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:25.028: INFO: Number of nodes with available pods: 0 Mar 19 
22:12:25.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:26.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:26.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:27.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:27.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:28.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:28.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:29.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:29.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:30.028: INFO: Number of nodes with available pods: 0 Mar 19 22:12:30.028: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:31.027: INFO: Number of nodes with available pods: 0 Mar 19 22:12:31.027: INFO: Node jerma-worker is running more than one daemon pod Mar 19 22:12:32.028: INFO: Number of nodes with available pods: 1 Mar 19 22:12:32.028: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-260, will wait for the garbage collector to delete the pods Mar 19 22:12:32.093: INFO: Deleting DaemonSet.extensions daemon-set took: 6.398802ms Mar 19 22:12:32.393: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.199663ms Mar 19 22:12:35.708: INFO: Number of nodes with available pods: 0 Mar 19 22:12:35.708: INFO: Number of running nodes: 0, number of available pods: 0 Mar 19 22:12:35.710: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-260/daemonsets","resourceVersion":"1132595"},"items":null} Mar 19 22:12:35.713: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-260/pods","resourceVersion":"1132595"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:35.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-260" for this suite. 
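The Watchers test a little further up hinges on one property: a new watch opened at the resourceVersion last delivered by a closed watch replays every intervening event, which is why the second watch above still received the mutation-2 MODIFIED and the DELETED notifications. Resuming a watch with client-go looks roughly like this sketch (the e2e test also narrows the watch with a field selector on the ConfigMap name, omitted here; names are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resumeWatch opens a new ConfigMap watch starting at the resourceVersion
// observed by a previous, now-closed watch, so no events are missed.
func resumeWatch(cs kubernetes.Interface, ns, lastRV string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: lastRV, // e.g. the version from the last event seen
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s\n", event.Type) // ADDED, MODIFIED, DELETED, ...
	}
	return nil
}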
• [SLOW TEST:20.360 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":244,"skipped":3926,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:35.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 22:12:36.542: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 22:12:38.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252756, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252756, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252756, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252756, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 22:12:41.577: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy 
but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:51.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7126" for this suite. STEP: Destroying namespace "webhook-7126-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.031 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":245,"skipped":3948,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:51.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 19 22:12:51.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916" in namespace "downward-api-7034" to be "success or failure" Mar 19 22:12:51.895: INFO: Pod "downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274226ms Mar 19 22:12:53.933: INFO: Pod "downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04242894s Mar 19 22:12:55.944: INFO: Pod "downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052501931s STEP: Saw pod success Mar 19 22:12:55.944: INFO: Pod "downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916" satisfied condition "success or failure" Mar 19 22:12:55.945: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916 container client-container: STEP: delete the pod Mar 19 22:12:55.982: INFO: Waiting for pod downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916 to disappear Mar 19 22:12:55.992: INFO: Pod downwardapi-volume-ef5451c9-2176-4889-9cc1-6ff72f254916 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:12:55.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7034" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3954,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:12:55.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 19 22:12:56.400: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 19 22:12:58.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252776, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252776, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252776, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252776, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 22:13:01.496: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:13:01.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:02.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4766" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.764 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":247,"skipped":3966,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:02.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5134 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5134 STEP: creating replication controller externalsvc in namespace services-5134 I0319 22:13:02.999017 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5134, replica count: 2 I0319 22:13:06.049533 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:13:09.049768 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 19 22:13:09.082: INFO: Creating new exec pod Mar 19 22:13:13.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5134 execpod7wqmx -- /bin/sh -x -c nslookup clusterip-service' Mar 19 22:13:15.654: INFO: stderr: "I0319 22:13:15.572232 3792 log.go:172] (0xc0009e4b00) (0xc0008ee140) Create stream\nI0319 
22:13:15.572266 3792 log.go:172] (0xc0009e4b00) (0xc0008ee140) Stream added, broadcasting: 1\nI0319 22:13:15.574935 3792 log.go:172] (0xc0009e4b00) Reply frame received for 1\nI0319 22:13:15.574980 3792 log.go:172] (0xc0009e4b00) (0xc00091e0a0) Create stream\nI0319 22:13:15.574994 3792 log.go:172] (0xc0009e4b00) (0xc00091e0a0) Stream added, broadcasting: 3\nI0319 22:13:15.576212 3792 log.go:172] (0xc0009e4b00) Reply frame received for 3\nI0319 22:13:15.576254 3792 log.go:172] (0xc0009e4b00) (0xc000866000) Create stream\nI0319 22:13:15.576265 3792 log.go:172] (0xc0009e4b00) (0xc000866000) Stream added, broadcasting: 5\nI0319 22:13:15.577537 3792 log.go:172] (0xc0009e4b00) Reply frame received for 5\nI0319 22:13:15.636066 3792 log.go:172] (0xc0009e4b00) Data frame received for 5\nI0319 22:13:15.636095 3792 log.go:172] (0xc000866000) (5) Data frame handling\nI0319 22:13:15.636109 3792 log.go:172] (0xc000866000) (5) Data frame sent\n+ nslookup clusterip-service\nI0319 22:13:15.645999 3792 log.go:172] (0xc0009e4b00) Data frame received for 3\nI0319 22:13:15.646028 3792 log.go:172] (0xc00091e0a0) (3) Data frame handling\nI0319 22:13:15.646053 3792 log.go:172] (0xc00091e0a0) (3) Data frame sent\nI0319 22:13:15.647206 3792 log.go:172] (0xc0009e4b00) Data frame received for 3\nI0319 22:13:15.647240 3792 log.go:172] (0xc00091e0a0) (3) Data frame handling\nI0319 22:13:15.647264 3792 log.go:172] (0xc00091e0a0) (3) Data frame sent\nI0319 22:13:15.647554 3792 log.go:172] (0xc0009e4b00) Data frame received for 5\nI0319 22:13:15.647574 3792 log.go:172] (0xc000866000) (5) Data frame handling\nI0319 22:13:15.647594 3792 log.go:172] (0xc0009e4b00) Data frame received for 3\nI0319 22:13:15.647608 3792 log.go:172] (0xc00091e0a0) (3) Data frame handling\nI0319 22:13:15.649838 3792 log.go:172] (0xc0009e4b00) Data frame received for 1\nI0319 22:13:15.649859 3792 log.go:172] (0xc0008ee140) (1) Data frame handling\nI0319 22:13:15.649872 3792 log.go:172] (0xc0008ee140) (1) Data frame sent\nI0319 22:13:15.649889 3792 log.go:172] (0xc0009e4b00) (0xc0008ee140) Stream removed, broadcasting: 1\nI0319 22:13:15.649930 3792 log.go:172] (0xc0009e4b00) Go away received\nI0319 22:13:15.650259 3792 log.go:172] (0xc0009e4b00) (0xc0008ee140) Stream removed, broadcasting: 1\nI0319 22:13:15.650283 3792 log.go:172] (0xc0009e4b00) (0xc00091e0a0) Stream removed, broadcasting: 3\nI0319 22:13:15.650295 3792 log.go:172] (0xc0009e4b00) (0xc000866000) Stream removed, broadcasting: 5\n" Mar 19 22:13:15.654: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5134.svc.cluster.local\tcanonical name = externalsvc.services-5134.svc.cluster.local.\nName:\texternalsvc.services-5134.svc.cluster.local\nAddress: 10.109.148.71\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5134, will wait for the garbage collector to delete the pods Mar 19 22:13:15.713: INFO: Deleting ReplicationController externalsvc took: 5.182482ms Mar 19 22:13:15.813: INFO: Terminating ReplicationController externalsvc pods took: 100.269498ms Mar 19 22:13:29.529: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:29.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5134" for this suite. 
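The type change exercised above can be reproduced by hand; a minimal sketch, assuming an existing ClusterIP Service named clusterip-service and a pod with nslookup available (the exec pod name is illustrative):

    # Sketch: switch an existing ClusterIP Service to ExternalName, pointing it
    # at another in-cluster service's DNS name; clusterIP must be cleared when
    # the type changes to ExternalName.
    kubectl patch service clusterip-service --namespace services-5134 --type merge \
      -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-5134.svc.cluster.local","clusterIP":null}}'

    # Verify the resulting CNAME the same way the test does:
    kubectl exec --namespace services-5134 execpod -- nslookup clusterip-service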
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.849 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":248,"skipped":3985,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:29.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 19 22:13:29.672: INFO: Waiting up to 5m0s for pod "pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2" in namespace "emptydir-892" to be "success or failure" Mar 19 22:13:29.676: INFO: Pod "pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664152ms Mar 19 22:13:31.680: INFO: Pod "pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007603941s Mar 19 22:13:33.683: INFO: Pod "pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010998998s STEP: Saw pod success Mar 19 22:13:33.683: INFO: Pod "pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2" satisfied condition "success or failure" Mar 19 22:13:33.686: INFO: Trying to get logs from node jerma-worker pod pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2 container test-container: STEP: delete the pod Mar 19 22:13:33.726: INFO: Waiting for pod pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2 to disappear Mar 19 22:13:33.736: INFO: Pod pod-25c93f53-78d9-4121-8c32-4ad22d8b4bd2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:33.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-892" for this suite. 
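The pod this emptyDir case creates can be sketched as a plain manifest; a minimal approximation, assuming a busybox image and an illustrative pod name (the 0777 mode itself is asserted by the e2e test container, not by the volume stanza):

    kubectl apply -f - <<'EOF'
    # Sketch: tmpfs-backed emptyDir mounted into a pod running as a non-root
    # user, mirroring the (non-root,0777,tmpfs) case above.
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      securityContext:
        runAsUser: 1001            # non-root
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      restartPolicy: Never
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory           # tmpfs instead of node-local disk
    EOF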
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":3992,"failed":0} ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:33.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 19 22:13:33.870: INFO: Waiting up to 5m0s for pod "downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2" in namespace "downward-api-4439" to be "success or failure" Mar 19 22:13:33.880: INFO: Pod "downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010434ms Mar 19 22:13:35.983: INFO: Pod "downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113116383s Mar 19 22:13:37.987: INFO: Pod "downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116735389s STEP: Saw pod success Mar 19 22:13:37.987: INFO: Pod "downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2" satisfied condition "success or failure" Mar 19 22:13:37.990: INFO: Trying to get logs from node jerma-worker2 pod downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2 container dapi-container: STEP: delete the pod Mar 19 22:13:38.021: INFO: Waiting for pod downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2 to disappear Mar 19 22:13:38.030: INFO: Pod downward-api-1ddd7e84-3e6b-4436-8e31-c95c5014add2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:38.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4439" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3992,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:38.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-e3f9b2f6-6b65-4530-a299-7e7880fdd856 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:38.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8556" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":251,"skipped":4010,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:38.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 19 22:13:42.765: INFO: Successfully updated pod "pod-update-05246dc7-e325-4cf5-b7bb-979b1f3058c7" STEP: verifying the updated pod is in kubernetes Mar 19 22:13:42.774: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:42.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2621" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4012,"failed":0} S ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:42.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 19 22:13:42.857: INFO: Created pod &Pod{ObjectMeta:{dns-4344 dns-4344 /api/v1/namespaces/dns-4344/pods/dns-4344 2f7e0bc3-cd50-4c88-b4ae-2ef05a173a2c 1133127 0 2020-03-19 22:13:42 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-flns9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-flns9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-flns9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableS
erviceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Mar 19 22:13:46.866: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4344 PodName:dns-4344 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 22:13:46.866: INFO: >>> kubeConfig: /root/.kube/config I0319 22:13:46.902429 6 log.go:172] (0xc00165c630) (0xc0009921e0) Create stream I0319 22:13:46.902468 6 log.go:172] (0xc00165c630) (0xc0009921e0) Stream added, broadcasting: 1 I0319 22:13:46.908107 6 log.go:172] (0xc00165c630) Reply frame received for 1 I0319 22:13:46.908154 6 log.go:172] (0xc00165c630) (0xc000992460) Create stream I0319 22:13:46.908169 6 log.go:172] (0xc00165c630) (0xc000992460) Stream added, broadcasting: 3 I0319 22:13:46.910141 6 log.go:172] (0xc00165c630) Reply frame received for 3 I0319 22:13:46.910170 6 log.go:172] (0xc00165c630) (0xc0014bcf00) Create stream I0319 22:13:46.910181 6 log.go:172] (0xc00165c630) (0xc0014bcf00) Stream added, broadcasting: 5 I0319 22:13:46.910864 6 log.go:172] (0xc00165c630) Reply frame received for 5 I0319 22:13:47.012359 6 log.go:172] (0xc00165c630) Data frame received for 3 I0319 22:13:47.012406 6 log.go:172] (0xc000992460) (3) Data frame handling I0319 22:13:47.012532 6 log.go:172] (0xc000992460) (3) Data frame sent I0319 22:13:47.013075 6 log.go:172] (0xc00165c630) Data frame received for 3 I0319 22:13:47.013255 6 log.go:172] (0xc000992460) (3) Data frame handling I0319 22:13:47.013311 6 log.go:172] (0xc00165c630) Data frame received for 5 I0319 22:13:47.013338 6 log.go:172] (0xc0014bcf00) (5) Data frame handling I0319 22:13:47.014982 6 log.go:172] (0xc00165c630) Data frame received for 1 I0319 22:13:47.015014 6 log.go:172] (0xc0009921e0) (1) Data frame handling I0319 22:13:47.015038 6 log.go:172] (0xc0009921e0) (1) Data frame sent I0319 22:13:47.015056 6 log.go:172] (0xc00165c630) (0xc0009921e0) Stream removed, broadcasting: 1 I0319 22:13:47.015083 6 log.go:172] (0xc00165c630) Go away received I0319 22:13:47.015277 6 log.go:172] (0xc00165c630) (0xc0009921e0) Stream removed, broadcasting: 1 I0319 22:13:47.015318 6 log.go:172] (0xc00165c630) (0xc000992460) Stream removed, broadcasting: 3 I0319 22:13:47.015341 6 log.go:172] (0xc00165c630) (0xc0014bcf00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 19 22:13:47.015: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4344 PodName:dns-4344 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 19 22:13:47.015: INFO: >>> kubeConfig: /root/.kube/config I0319 22:13:47.046425 6 log.go:172] (0xc002104630) (0xc00234ee60) Create stream I0319 22:13:47.046453 6 log.go:172] (0xc002104630) (0xc00234ee60) Stream added, broadcasting: 1 I0319 22:13:47.048490 6 log.go:172] (0xc002104630) Reply frame received for 1 I0319 22:13:47.048523 6 log.go:172] (0xc002104630) (0xc0023e6e60) Create stream I0319 22:13:47.048534 6 log.go:172] (0xc002104630) (0xc0023e6e60) Stream added, broadcasting: 3 I0319 22:13:47.049636 6 log.go:172] (0xc002104630) Reply frame received for 3 I0319 22:13:47.049691 6 log.go:172] (0xc002104630) (0xc0014bd0e0) Create stream I0319 22:13:47.049709 6 log.go:172] (0xc002104630) (0xc0014bd0e0) Stream added, broadcasting: 5 I0319 22:13:47.050663 6 log.go:172] (0xc002104630) Reply frame received for 5 I0319 22:13:47.103343 6 log.go:172] (0xc002104630) Data frame received for 3 I0319 22:13:47.103375 6 log.go:172] (0xc0023e6e60) (3) Data frame handling I0319 22:13:47.103393 6 log.go:172] (0xc0023e6e60) (3) Data frame sent I0319 22:13:47.103950 6 log.go:172] (0xc002104630) Data frame received for 3 I0319 22:13:47.103999 6 log.go:172] (0xc0023e6e60) (3) Data frame handling I0319 22:13:47.104034 6 log.go:172] (0xc002104630) Data frame received for 5 I0319 22:13:47.104055 6 log.go:172] (0xc0014bd0e0) (5) Data frame handling I0319 22:13:47.105670 6 log.go:172] (0xc002104630) Data frame received for 1 I0319 22:13:47.105692 6 log.go:172] (0xc00234ee60) (1) Data frame handling I0319 22:13:47.105711 6 log.go:172] (0xc00234ee60) (1) Data frame sent I0319 22:13:47.105727 6 log.go:172] (0xc002104630) (0xc00234ee60) Stream removed, broadcasting: 1 I0319 22:13:47.105752 6 log.go:172] (0xc002104630) Go away received I0319 22:13:47.105805 6 log.go:172] (0xc002104630) (0xc00234ee60) Stream removed, broadcasting: 1 I0319 22:13:47.105828 6 log.go:172] (0xc002104630) (0xc0023e6e60) Stream removed, broadcasting: 3 I0319 22:13:47.105845 6 log.go:172] (0xc002104630) (0xc0014bd0e0) Stream removed, broadcasting: 5 Mar 19 22:13:47.105: INFO: Deleting pod dns-4344... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:47.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4344" for this suite. 
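The dnsPolicy=None configuration created above (nameserver 1.1.1.1, search path resolv.conf.local) can be sketched as a plain manifest; the pod name is illustrative:

    kubectl apply -f - <<'EOF'
    # Sketch: fully custom pod DNS; with dnsPolicy None, only the dnsConfig
    # entries end up in the pod's /etc/resolv.conf.
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-config-demo
    spec:
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 1.1.1.1
        searches:
        - resolv.conf.local
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["pause"]
    EOF

    kubectl exec dns-config-demo -- cat /etc/resolv.conf   # should list only 1.1.1.1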
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":253,"skipped":4013,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:47.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 19 22:13:47.229: INFO: Waiting up to 5m0s for pod "pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3" in namespace "emptydir-9873" to be "success or failure" Mar 19 22:13:47.379: INFO: Pod "pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 150.598186ms Mar 19 22:13:49.383: INFO: Pod "pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154598706s Mar 19 22:13:51.388: INFO: Pod "pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158936037s STEP: Saw pod success Mar 19 22:13:51.388: INFO: Pod "pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3" satisfied condition "success or failure" Mar 19 22:13:51.391: INFO: Trying to get logs from node jerma-worker2 pod pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3 container test-container: STEP: delete the pod Mar 19 22:13:51.420: INFO: Waiting for pod pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3 to disappear Mar 19 22:13:51.441: INFO: Pod pod-5b1048d0-552d-48d1-a6e7-b82a12df5ab3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:13:51.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9873" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:13:51.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating an pod Mar 19 22:13:51.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-152 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 19 22:13:51.594: INFO: stderr: "" Mar 19 22:13:51.594: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 19 22:13:51.594: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 19 22:13:51.594: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-152" to be "running and ready, or succeeded" Mar 19 22:13:51.598: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172177ms Mar 19 22:13:53.623: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028259468s Mar 19 22:13:55.627: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.032689512s Mar 19 22:13:55.627: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 19 22:13:55.627: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 19 22:13:55.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-152' Mar 19 22:13:55.731: INFO: stderr: "" Mar 19 22:13:55.731: INFO: stdout: "I0319 22:13:53.739884 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/b8f 541\nI0319 22:13:53.940185 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/txn 505\nI0319 22:13:54.140069 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/phj 500\nI0319 22:13:54.340100 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/966r 326\nI0319 22:13:54.540049 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/pl7 356\nI0319 22:13:54.740121 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/m5w6 282\nI0319 22:13:54.940082 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/wxjj 358\nI0319 22:13:55.140133 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/phs7 440\nI0319 22:13:55.340069 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/xzf 279\nI0319 22:13:55.540077 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/cc5r 571\n" STEP: limiting log lines Mar 19 22:13:55.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-152 --tail=1' Mar 19 22:13:55.840: INFO: stderr: "" Mar 19 22:13:55.840: INFO: stdout: "I0319 22:13:55.740049 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/rf4 544\n" Mar 19 22:13:55.840: INFO: got output "I0319 22:13:55.740049 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/rf4 544\n" STEP: limiting log bytes Mar 19 22:13:55.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-152 --limit-bytes=1' Mar 19 22:13:55.949: INFO: stderr: "" Mar 19 22:13:55.949: INFO: stdout: "I" Mar 19 22:13:55.949: INFO: got output "I" STEP: exposing timestamps Mar 19 22:13:55.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-152 --tail=1 --timestamps' Mar 19 22:13:56.054: INFO: stderr: "" Mar 19 22:13:56.055: INFO: stdout: "2020-03-19T22:13:55.940205266Z I0319 22:13:55.940039 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/7cll 519\n" Mar 19 22:13:56.055: INFO: got output "2020-03-19T22:13:55.940205266Z I0319 22:13:55.940039 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/7cll 519\n" STEP: restricting to a time range Mar 19 22:13:58.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-152 --since=1s' Mar 19 22:13:58.674: INFO: stderr: "" Mar 19 22:13:58.675: INFO: stdout: "I0319 22:13:57.740065 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/2s22 248\nI0319 22:13:57.940083 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/zctq 536\nI0319 22:13:58.140115 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/7lc 427\nI0319 22:13:58.340038 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/9bbr 263\nI0319 22:13:58.540108 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/cbd2 271\n" Mar 19 22:13:58.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-152 --since=24h' Mar 19 22:13:58.790:
INFO: stderr: "" Mar 19 22:13:58.790: INFO: stdout: "I0319 22:13:53.739884 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/b8f 541\nI0319 22:13:53.940185 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/txn 505\nI0319 22:13:54.140069 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/phj 500\nI0319 22:13:54.340100 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/966r 326\nI0319 22:13:54.540049 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/pl7 356\nI0319 22:13:54.740121 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/m5w6 282\nI0319 22:13:54.940082 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/wxjj 358\nI0319 22:13:55.140133 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/phs7 440\nI0319 22:13:55.340069 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/xzf 279\nI0319 22:13:55.540077 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/cc5r 571\nI0319 22:13:55.740049 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/rf4 544\nI0319 22:13:55.940039 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/7cll 519\nI0319 22:13:56.140099 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/n2rb 298\nI0319 22:13:56.340097 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/smb 231\nI0319 22:13:56.540049 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/k7pq 299\nI0319 22:13:56.740082 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/4dr 339\nI0319 22:13:56.940061 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/mvd 452\nI0319 22:13:57.140125 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/sf5 237\nI0319 22:13:57.340039 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/whx 487\nI0319 22:13:57.540099 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/7lqj 311\nI0319 22:13:57.740065 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/2s22 248\nI0319 22:13:57.940083 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/zctq 536\nI0319 22:13:58.140115 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/7lc 427\nI0319 22:13:58.340038 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/9bbr 263\nI0319 22:13:58.540108 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/cbd2 271\nI0319 22:13:58.740020 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/rj4 390\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 19 22:13:58.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-152' Mar 19 22:14:09.233: INFO: stderr: "" Mar 19 22:14:09.233: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:09.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-152" for this suite. 
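For reference, the filtering options this test walks through map directly onto kubectl logs flags; a sketch against any running pod (the pod name is illustrative):

    kubectl logs logs-generator                        # everything so far
    kubectl logs logs-generator --tail=1               # only the last line
    kubectl logs logs-generator --limit-bytes=1        # only the first byte
    kubectl logs logs-generator --tail=1 --timestamps  # prefix RFC3339 timestamps
    kubectl logs logs-generator --since=1s             # only entries from the last second
    kubectl logs logs-generator --since=24h            # everything from the last day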
• [SLOW TEST:17.810 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":255,"skipped":4069,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:09.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 19 22:14:15.899: INFO: Successfully updated pod "annotationupdatec5e4f6a0-7c19-4f37-8ffd-29b1ddcbbf01" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:19.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1746" for this suite. 
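The annotation-update mechanism this test relies on can be sketched as a projected downward-API volume; the pod name and annotation key are illustrative:

    kubectl apply -f - <<'EOF'
    # Sketch: pod annotations surfaced as a file through a projected
    # downwardAPI volume; the kubelet rewrites the file when they change.
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        builder: alice
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    EOF

    # Changing the annotation later is reflected in the mounted file:
    kubectl annotate pod annotationupdate-demo builder=bob --overwrite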
• [SLOW TEST:10.670 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:19.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-334 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-334 I0319 22:14:20.127277 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-334, replica count: 2 I0319 22:14:23.177690 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:14:26.177939 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0319 22:14:29.178114 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 19 22:14:29.178: INFO: Creating new exec pod Mar 19 22:14:36.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-334 execpodpwgvq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 19 22:14:36.414: INFO: stderr: "I0319 22:14:36.313026 3989 log.go:172] (0xc000a44d10) (0xc0008e8320) Create stream\nI0319 22:14:36.313082 3989 log.go:172] (0xc000a44d10) (0xc0008e8320) Stream added, broadcasting: 1\nI0319 22:14:36.315648 3989 log.go:172] (0xc000a44d10) Reply frame received for 1\nI0319 22:14:36.315721 3989 log.go:172] (0xc000a44d10) (0xc000a82640) Create stream\nI0319 22:14:36.316387 3989 log.go:172] (0xc000a44d10) (0xc000a82640) Stream added, broadcasting: 3\nI0319 22:14:36.318256 3989 log.go:172] (0xc000a44d10) Reply frame received for 3\nI0319 22:14:36.318287 3989 log.go:172] (0xc000a44d10) (0xc000a82000) Create stream\nI0319 22:14:36.318299 3989 log.go:172] (0xc000a44d10) (0xc000a82000) Stream added, broadcasting: 5\nI0319 22:14:36.319196 3989 log.go:172] (0xc000a44d10) Reply frame received for 5\nI0319 22:14:36.407801 3989 log.go:172] (0xc000a44d10) Data frame 
received for 3\nI0319 22:14:36.407839 3989 log.go:172] (0xc000a82640) (3) Data frame handling\nI0319 22:14:36.407862 3989 log.go:172] (0xc000a44d10) Data frame received for 5\nI0319 22:14:36.407871 3989 log.go:172] (0xc000a82000) (5) Data frame handling\nI0319 22:14:36.407882 3989 log.go:172] (0xc000a82000) (5) Data frame sent\nI0319 22:14:36.407891 3989 log.go:172] (0xc000a44d10) Data frame received for 5\nI0319 22:14:36.407898 3989 log.go:172] (0xc000a82000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0319 22:14:36.409747 3989 log.go:172] (0xc000a44d10) Data frame received for 1\nI0319 22:14:36.409776 3989 log.go:172] (0xc0008e8320) (1) Data frame handling\nI0319 22:14:36.409799 3989 log.go:172] (0xc0008e8320) (1) Data frame sent\nI0319 22:14:36.409814 3989 log.go:172] (0xc000a44d10) (0xc0008e8320) Stream removed, broadcasting: 1\nI0319 22:14:36.409934 3989 log.go:172] (0xc000a44d10) Go away received\nI0319 22:14:36.410096 3989 log.go:172] (0xc000a44d10) (0xc0008e8320) Stream removed, broadcasting: 1\nI0319 22:14:36.410114 3989 log.go:172] (0xc000a44d10) (0xc000a82640) Stream removed, broadcasting: 3\nI0319 22:14:36.410124 3989 log.go:172] (0xc000a44d10) (0xc000a82000) Stream removed, broadcasting: 5\n" Mar 19 22:14:36.414: INFO: stdout: "" Mar 19 22:14:36.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-334 execpodpwgvq -- /bin/sh -x -c nc -zv -t -w 2 10.111.56.81 80' Mar 19 22:14:36.615: INFO: stderr: "I0319 22:14:36.549811 4012 log.go:172] (0xc00092c0b0) (0xc0007200a0) Create stream\nI0319 22:14:36.549880 4012 log.go:172] (0xc00092c0b0) (0xc0007200a0) Stream added, broadcasting: 1\nI0319 22:14:36.551620 4012 log.go:172] (0xc00092c0b0) Reply frame received for 1\nI0319 22:14:36.551682 4012 log.go:172] (0xc00092c0b0) (0xc000728dc0) Create stream\nI0319 22:14:36.551707 4012 log.go:172] (0xc00092c0b0) (0xc000728dc0) Stream added, broadcasting: 3\nI0319 22:14:36.552652 4012 log.go:172] (0xc00092c0b0) Reply frame received for 3\nI0319 22:14:36.552689 4012 log.go:172] (0xc00092c0b0) (0xc000728e60) Create stream\nI0319 22:14:36.552698 4012 log.go:172] (0xc00092c0b0) (0xc000728e60) Stream added, broadcasting: 5\nI0319 22:14:36.553602 4012 log.go:172] (0xc00092c0b0) Reply frame received for 5\nI0319 22:14:36.609309 4012 log.go:172] (0xc00092c0b0) Data frame received for 3\nI0319 22:14:36.609335 4012 log.go:172] (0xc000728dc0) (3) Data frame handling\nI0319 22:14:36.609378 4012 log.go:172] (0xc00092c0b0) Data frame received for 5\nI0319 22:14:36.609387 4012 log.go:172] (0xc000728e60) (5) Data frame handling\nI0319 22:14:36.609402 4012 log.go:172] (0xc000728e60) (5) Data frame sent\nI0319 22:14:36.609410 4012 log.go:172] (0xc00092c0b0) Data frame received for 5\n+ nc -zv -t -w 2 10.111.56.81 80\nConnection to 10.111.56.81 80 port [tcp/http] succeeded!\nI0319 22:14:36.609418 4012 log.go:172] (0xc000728e60) (5) Data frame handling\nI0319 22:14:36.610702 4012 log.go:172] (0xc00092c0b0) Data frame received for 1\nI0319 22:14:36.610728 4012 log.go:172] (0xc0007200a0) (1) Data frame handling\nI0319 22:14:36.610747 4012 log.go:172] (0xc0007200a0) (1) Data frame sent\nI0319 22:14:36.610776 4012 log.go:172] (0xc00092c0b0) (0xc0007200a0) Stream removed, broadcasting: 1\nI0319 22:14:36.610796 4012 log.go:172] (0xc00092c0b0) Go away received\nI0319 22:14:36.611259 4012 log.go:172] (0xc00092c0b0) (0xc0007200a0) Stream removed, broadcasting: 1\nI0319 22:14:36.611279 4012 
log.go:172] (0xc00092c0b0) (0xc000728dc0) Stream removed, broadcasting: 3\nI0319 22:14:36.611288 4012 log.go:172] (0xc00092c0b0) (0xc000728e60) Stream removed, broadcasting: 5\n" Mar 19 22:14:36.615: INFO: stdout: "" Mar 19 22:14:36.615: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:36.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-334" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.717 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":257,"skipped":4104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:36.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 22:14:37.415: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 22:14:39.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252877, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252877, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252877, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252877, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 22:14:42.526: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:42.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4128" for this suite. STEP: Destroying namespace "webhook-4128-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.315 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":258,"skipped":4134,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:42.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 19 22:14:43.034: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 19 22:14:43.123: INFO: Waiting for terminating namespaces to be deleted...
Mar 19 22:14:43.134: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 19 22:14:43.139: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:14:43.139: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 22:14:43.139: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:14:43.139: INFO: Container kube-proxy ready: true, restart count 0 Mar 19 22:14:43.139: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 19 22:14:43.149: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:14:43.149: INFO: Container kindnet-cni ready: true, restart count 0 Mar 19 22:14:43.149: INFO: sample-webhook-deployment-5f65f8c764-gsr7z from webhook-4128 started at 2020-03-19 22:14:37 +0000 UTC (1 container statuses recorded) Mar 19 22:14:43.149: INFO: Container sample-webhook ready: true, restart count 0 Mar 19 22:14:43.149: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 19 22:14:43.149: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 19 22:14:43.204: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 19 22:14:43.204: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 19 22:14:43.204: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 19 22:14:43.204: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 Mar 19 22:14:43.204: INFO: Pod sample-webhook-deployment-5f65f8c764-gsr7z requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 19 22:14:43.204: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 19 22:14:43.236: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
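The events that follow make the scheduling arithmetic concrete: each filler pod pins the CPU the test computed as still unrequested on its node (cpu=11130m here, on top of kindnet's 100m), so the subsequent "additional" pod cannot fit on any schedulable node. A sketch of the filler-pod shape, with a hypothetical name and the pause image the events below confirm:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Filler pod shaped like the ones the log creates: it requests the
    	// remaining node CPU so the follow-up pod must fail scheduling.
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example"}, // hypothetical name
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "filler",
    				Image: "k8s.gcr.io/pause:3.1",
    				Resources: corev1.ResourceRequirements{
    					Requests: corev1.ResourceList{
    						corev1.ResourceCPU: resource.MustParse("11130m"),
    					},
    				},
    			}},
    		},
    	}
    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }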
STEP: Considering event: Type = [Normal], Name = [filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd.15fdd44fd5883969], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1957/filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd.15fdd45031fe4b3c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd.15fdd4507733e7b9], Reason = [Created], Message = [Created container filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd] STEP: Considering event: Type = [Normal], Name = [filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd.15fdd45089abe0bf], Reason = [Started], Message = [Started container filler-pod-15eb52ca-1b89-4d62-ac47-2ced823558bd] STEP: Considering event: Type = [Normal], Name = [filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30.15fdd44fd6453fda], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1957/filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30.15fdd45057e82ca6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30.15fdd45083c4d789], Reason = [Created], Message = [Created container filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30] STEP: Considering event: Type = [Normal], Name = [filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30.15fdd450921f0318], Reason = [Started], Message = [Started container filler-pod-d8316eca-d3d0-41b2-8dc0-e9f107207e30] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fdd450c6677de6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:48.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1957" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.412 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":259,"skipped":4148,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:48.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 19 22:14:48.452: INFO: Waiting up to 5m0s for pod "pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af" in namespace "emptydir-9514" to be "success or failure" Mar 19 22:14:48.471: INFO: Pod "pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af": Phase="Pending", Reason="", readiness=false. Elapsed: 18.698449ms Mar 19 22:14:50.475: INFO: Pod "pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022517751s Mar 19 22:14:52.479: INFO: Pod "pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026072041s STEP: Saw pod success Mar 19 22:14:52.479: INFO: Pod "pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af" satisfied condition "success or failure" Mar 19 22:14:52.480: INFO: Trying to get logs from node jerma-worker2 pod pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af container test-container: STEP: delete the pod Mar 19 22:14:52.662: INFO: Waiting for pod pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af to disappear Mar 19 22:14:52.678: INFO: Pod pod-0d8c10d9-f760-4fe0-ac32-c29fdf95d9af no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:52.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9514" for this suite. 
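For the emptyDir spec wrapping up above, the pod under test is little more than a container writing a 0644-mode file into an emptyDir volume on the default (node-disk) medium. A sketch of that shape, assuming the mounttest-style image e2e suites of this era use and hypothetical pod/volume names; the tmpfs variant later in this run differs only in the Medium field:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"}, // hypothetical name
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:         "test-container",
    				Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed test image
    				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "test-volume",
    				VolumeSource: corev1.VolumeSource{
    					// Default medium = node disk; StorageMediumMemory gives the tmpfs variant.
    					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
    				},
    			}},
    			RestartPolicy: corev1.RestartPolicyNever,
    		},
    	}
    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }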
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4158,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:52.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 22:14:53.224: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 22:14:55.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252893, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252893, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252893, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252893, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 22:14:58.289: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:14:58.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-806" for this suite. STEP: Destroying namespace "webhook-806-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.800 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":261,"skipped":4171,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:14:58.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-623b819e-3e03-44ef-9436-0ef3da870abf STEP: Creating a pod to test consume secrets Mar 19 22:14:58.594: INFO: Waiting up to 5m0s for pod "pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6" in namespace "secrets-859" to be "success or failure" Mar 19 22:14:58.597: INFO: Pod "pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393901ms Mar 19 22:15:00.600: INFO: Pod "pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005947785s Mar 19 22:15:02.605: INFO: Pod "pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010611956s STEP: Saw pod success Mar 19 22:15:02.605: INFO: Pod "pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6" satisfied condition "success or failure" Mar 19 22:15:02.608: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6 container secret-env-test: STEP: delete the pod Mar 19 22:15:02.646: INFO: Waiting for pod pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6 to disappear Mar 19 22:15:02.649: INFO: Pod pod-secrets-7accedf1-2cfc-482c-9c9b-a2091b4801b6 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:02.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-859" for this suite. 
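The Secrets spec above injects a Secret key into the container's environment. A sketch of the env entry, with stand-in secret and key names in place of the generated ones in the log:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Env var wired to a Secret key, as consumed by the secret-env-test
    	// container; names here are stand-ins for the generated ones above.
    	env := corev1.EnvVar{
    		Name: "SECRET_DATA",
    		ValueFrom: &corev1.EnvVarSource{
    			SecretKeyRef: &corev1.SecretKeySelector{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
    				Key:                  "data-1",
    			},
    		},
    	}
    	b, _ := json.MarshalIndent(env, "", "  ")
    	fmt.Println(string(b))
    }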
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4183,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:02.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:15:02.715: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d4072252-6bd0-4511-967b-945e86c966bd" in namespace "security-context-test-7163" to be "success or failure" Mar 19 22:15:02.720: INFO: Pod "busybox-user-65534-d4072252-6bd0-4511-967b-945e86c966bd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.699117ms Mar 19 22:15:04.724: INFO: Pod "busybox-user-65534-d4072252-6bd0-4511-967b-945e86c966bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009715026s Mar 19 22:15:06.729: INFO: Pod "busybox-user-65534-d4072252-6bd0-4511-967b-945e86c966bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014181273s Mar 19 22:15:06.729: INFO: Pod "busybox-user-65534-d4072252-6bd0-4511-967b-945e86c966bd" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7163" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:06.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 22:15:07.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 22:15:09.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252907, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252907, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252907, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252907, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 22:15:12.202: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:12.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1836" for this suite. STEP: Destroying namespace "webhook-1836-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.655 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":264,"skipped":4242,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:12.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:15:12.452: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:16.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2048" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4254,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:16.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 19 22:15:16.958: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 19 22:15:19.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252916, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252916, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252917, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252916, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 19 22:15:21.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252916, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252916, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252917, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720252916, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 19 22:15:24.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 19 22:15:24.067: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:24.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7518" for this suite. STEP: Destroying namespace "webhook-7518-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.681 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":266,"skipped":4269,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:24.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 19 22:15:27.352: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:27.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5603" for this suite. 
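The termination-message spec above relies on terminationMessagePolicy: FallbackToLogsOnError, under which the kubelet fills the termination message from the tail of the container's log when the container fails without writing its termination-message file. A sketch, with an assumed image and a command chosen to emit output and exit non-zero:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// With no file at terminationMessagePath and a failing exit, the kubelet
    	// copies the last log bytes into the termination message instead.
    	container := corev1.Container{
    		Name:                     "termination-message-container",
    		Image:                    "busybox", // assumed image
    		Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
    		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    	}
    	b, _ := json.MarshalIndent(container, "", "  ")
    	fmt.Println(string(b))
    }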
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4291,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:27.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 19 22:15:27.711: INFO: Waiting up to 5m0s for pod "pod-d63ff48a-242b-41b3-8995-3904e8025bb3" in namespace "emptydir-9651" to be "success or failure" Mar 19 22:15:27.717: INFO: Pod "pod-d63ff48a-242b-41b3-8995-3904e8025bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216032ms Mar 19 22:15:29.720: INFO: Pod "pod-d63ff48a-242b-41b3-8995-3904e8025bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00950678s Mar 19 22:15:31.725: INFO: Pod "pod-d63ff48a-242b-41b3-8995-3904e8025bb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013761193s STEP: Saw pod success Mar 19 22:15:31.725: INFO: Pod "pod-d63ff48a-242b-41b3-8995-3904e8025bb3" satisfied condition "success or failure" Mar 19 22:15:31.728: INFO: Trying to get logs from node jerma-worker pod pod-d63ff48a-242b-41b3-8995-3904e8025bb3 container test-container: STEP: delete the pod Mar 19 22:15:32.034: INFO: Waiting for pod pod-d63ff48a-242b-41b3-8995-3904e8025bb3 to disappear Mar 19 22:15:32.045: INFO: Pod pod-d63ff48a-242b-41b3-8995-3904e8025bb3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:32.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9651" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4301,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:32.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:38.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9128" for this suite. STEP: Destroying namespace "nsdeletetest-5544" for this suite. Mar 19 22:15:38.272: INFO: Namespace nsdeletetest-5544 was already deleted STEP: Destroying namespace "nsdeletetest-8720" for this suite. • [SLOW TEST:6.212 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":269,"skipped":4308,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:38.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:42.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-898" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":270,"skipped":4312,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:42.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-d65e8902-e0dd-4661-9b66-fc2ac5ee920e STEP: Creating a pod to test consume configMaps Mar 19 22:15:42.576: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c" in namespace "configmap-6515" to be "success or failure" Mar 19 22:15:42.751: INFO: Pod "pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c": Phase="Pending", Reason="", readiness=false. Elapsed: 174.914836ms Mar 19 22:15:44.755: INFO: Pod "pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179194668s Mar 19 22:15:46.759: INFO: Pod "pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183307266s STEP: Saw pod success Mar 19 22:15:46.759: INFO: Pod "pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c" satisfied condition "success or failure" Mar 19 22:15:46.762: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c container configmap-volume-test: STEP: delete the pod Mar 19 22:15:46.818: INFO: Waiting for pod pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c to disappear Mar 19 22:15:46.831: INFO: Pod pod-configmaps-f2d59635-94f1-4df2-a5a3-18dcb6a7301c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:15:46.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6515" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4321,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:15:46.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 19 22:15:54.972: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:15:54.993: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:15:56.993: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:15:56.997: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:15:58.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:15:58.998: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:16:00.993: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:16:00.997: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:16:02.993: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:16:02.998: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:16:04.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:16:04.998: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:16:06.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:16:06.998: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:16:08.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:16:08.998: INFO: Pod pod-with-prestop-http-hook still exists Mar 19 22:16:10.993: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 19 22:16:10.997: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:16:11.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9256" for this suite. 
• [SLOW TEST:24.172 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:16:11.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 19 22:16:11.089: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:16:15.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4815" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:16:15.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:16:30.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3033" for this suite. STEP: Destroying namespace "nsdeletetest-2196" for this suite. Mar 19 22:16:30.447: INFO: Namespace nsdeletetest-2196 was already deleted STEP: Destroying namespace "nsdeletetest-7428" for this suite. • [SLOW TEST:15.222 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":274,"skipped":4424,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:16:30.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0319 22:16:41.466790 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
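Before the garbage-collector metrics dump that follows: the mechanism under test is ownerReferences. A pod listing both replication controllers as owners stays alive while either owner exists, so deleting simpletest-rc-to-be-deleted must leave those dual-owned pods behind. A sketch of such a reference list (UIDs are placeholders):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// An object is only collectible once *all* of its owners are gone.
    	owners := []metav1.OwnerReference{
    		{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-be-deleted", UID: "uid-1"}, // placeholder UID
    		{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-stay", UID: "uid-2"},       // placeholder UID
    	}
    	b, _ := json.MarshalIndent(owners, "", "  ")
    	fmt.Println(string(b))
    }

This is why the test phrases it as "both valid owner and owner that's waiting for dependents": the surviving owner keeps the pods out of the collector's reach.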
Mar 19 22:16:41.466: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:16:41.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4845" for this suite. • [SLOW TEST:11.024 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":275,"skipped":4432,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:16:41.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-31fd6865-9708-4e01-80b5-f1d384345a2b in namespace container-probe-8853 Mar 19 22:16:45.566: INFO: Started pod busybox-31fd6865-9708-4e01-80b5-f1d384345a2b in namespace container-probe-8853 STEP: checking the pod's current state and verifying that restartCount is present Mar 19 22:16:45.570: INFO: Initial restart count of pod busybox-31fd6865-9708-4e01-80b5-f1d384345a2b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:20:46.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8853" for this suite. 
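The probe behind the container-probe spec above is an exec liveness check: `cat /tmp/health` keeps succeeding, so restartCount must still be 0 after the roughly four-minute observation window. A sketch using the v1.17-era embedded Handler field (the timing values are assumptions):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// As long as the probed file exists, the exec probe exits 0 and the
    	// kubelet never restarts the container.
    	probe := corev1.Probe{
    		Handler: corev1.Handler{ // v1.17-era field name; later releases call it ProbeHandler
    			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    		},
    		InitialDelaySeconds: 15, // assumed timing
    		FailureThreshold:    1,  // assumed timing
    	}
    	b, _ := json.MarshalIndent(probe, "", "  ")
    	fmt.Println(string(b))
    }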
• [SLOW TEST:245.066 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:20:46.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 19 22:20:51.114: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1198 pod-service-account-5937393a-33ec-47a2-ae66-e383d8bb60d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 19 22:20:51.349: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1198 pod-service-account-5937393a-33ec-47a2-ae66-e383d8bb60d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 19 22:20:51.547: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1198 pod-service-account-5937393a-33ec-47a2-ae66-e383d8bb60d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:20:51.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1198" for this suite. 
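The three `kubectl exec ... cat` commands above read the standard service-account mount present in every pod that uses an API token. A sketch of the same reads done in-cluster, from inside such a pod:

    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"path/filepath"
    )

    func main() {
    	// The token, CA bundle, and namespace files the test cats over exec;
    	// they are projected into the pod at this fixed path.
    	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
    	for _, f := range []string{"token", "ca.crt", "namespace"} {
    		data, err := ioutil.ReadFile(filepath.Join(dir, f))
    		if err != nil {
    			fmt.Println(f, "->", err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", f, len(data))
    	}
    }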
• [SLOW TEST:5.266 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":277,"skipped":4479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 19 22:20:51.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 19 22:20:51.873: INFO: Waiting up to 5m0s for pod "pod-7de4b2bf-0faf-4288-b720-260bbe16024c" in namespace "emptydir-3738" to be "success or failure" Mar 19 22:20:51.876: INFO: Pod "pod-7de4b2bf-0faf-4288-b720-260bbe16024c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.337156ms Mar 19 22:20:53.882: INFO: Pod "pod-7de4b2bf-0faf-4288-b720-260bbe16024c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008621544s Mar 19 22:20:55.886: INFO: Pod "pod-7de4b2bf-0faf-4288-b720-260bbe16024c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01272963s STEP: Saw pod success Mar 19 22:20:55.886: INFO: Pod "pod-7de4b2bf-0faf-4288-b720-260bbe16024c" satisfied condition "success or failure" Mar 19 22:20:55.889: INFO: Trying to get logs from node jerma-worker pod pod-7de4b2bf-0faf-4288-b720-260bbe16024c container test-container: STEP: delete the pod Mar 19 22:20:55.917: INFO: Waiting for pod pod-7de4b2bf-0faf-4288-b720-260bbe16024c to disappear Mar 19 22:20:55.922: INFO: Pod pod-7de4b2bf-0faf-4288-b720-260bbe16024c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 19 22:20:55.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3738" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4552,"failed":0} SSSSSSSSSSSSSMar 19 22:20:55.929: INFO: Running AfterSuite actions on all nodes Mar 19 22:20:55.929: INFO: Running AfterSuite actions on node 1 Mar 19 22:20:55.929: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0} Ran 278 of 4843 Specs in 4396.237 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped PASS