I0602 23:38:29.771666 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0602 23:38:29.783538 7 e2e.go:129] Starting e2e run "6f7bef9f-c3a0-4567-970d-1ae4b3b83615" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591141107 - Will randomize all specs
Will run 288 of 5095 specs

Jun 2 23:38:29.852: INFO: >>> kubeConfig: /root/.kube/config
Jun 2 23:38:29.874: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 2 23:38:29.938: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 2 23:38:29.981: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 2 23:38:29.981: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 2 23:38:29.981: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 2 23:38:29.989: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 2 23:38:29.989: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 2 23:38:29.989: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
Jun 2 23:38:29.991: INFO: kube-apiserver version: v1.18.2
Jun 2 23:38:29.991: INFO: >>> kubeConfig: /root/.kube/config
Jun 2 23:38:29.996: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:38:29.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Jun 2 23:38:30.096: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-htxd
STEP: Creating a pod to test atomic-volume-subpath
Jun 2 23:38:30.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-htxd" in namespace "subpath-1318" to be "Succeeded or Failed"
Jun 2 23:38:30.188: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Pending", Reason="", readiness=false. Elapsed: 49.676766ms
Jun 2 23:38:32.308: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169640836s
Jun 2 23:38:34.311: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 4.172846356s
Jun 2 23:38:36.316: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 6.177535162s
Jun 2 23:38:38.321: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 8.182506419s
Jun 2 23:38:40.325: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 10.186682585s
Jun 2 23:38:42.328: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 12.189934197s
Jun 2 23:38:44.335: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 14.196291027s
Jun 2 23:38:46.338: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 16.199963237s
Jun 2 23:38:48.342: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 18.204100642s
Jun 2 23:38:50.362: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 20.223279658s
Jun 2 23:38:52.366: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 22.227304241s
Jun 2 23:38:54.370: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Running", Reason="", readiness=true. Elapsed: 24.23169311s
Jun 2 23:38:56.374: INFO: Pod "pod-subpath-test-projected-htxd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.235879494s
STEP: Saw pod success
Jun 2 23:38:56.374: INFO: Pod "pod-subpath-test-projected-htxd" satisfied condition "Succeeded or Failed"
Jun 2 23:38:56.377: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-htxd container test-container-subpath-projected-htxd:
STEP: delete the pod
Jun 2 23:38:56.406: INFO: Waiting for pod pod-subpath-test-projected-htxd to disappear
Jun 2 23:38:56.411: INFO: Pod pod-subpath-test-projected-htxd no longer exists
STEP: Deleting pod pod-subpath-test-projected-htxd
Jun 2 23:38:56.411: INFO: Deleting pod "pod-subpath-test-projected-htxd" in namespace "subpath-1318"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:38:56.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1318" for this suite.
• [SLOW TEST:26.446 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":1,"skipped":10,"failed":0}
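For reference, a minimal Go sketch (using the k8s.io/api types the suite itself is built on) of the kind of pod this subpath test constructs; the image, names and command are illustrative stand-ins, not the suite's actual fixture:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative sketch: mount a single key of a projected volume through
// SubPath, so the container sees one file while the kubelet keeps
// atomically rewriting the volume contents underneath it.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "projected-vol",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							// "my-config" is a hypothetical ConfigMap name.
							LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container-subpath",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /probe/key && sleep 20"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "projected-vol",
				MountPath: "/probe/key",
				SubPath:   "key", // pin the mount to a single entry of the volume
			}},
		}},
	},
}

func main() {}

The SubPath mount is the property the wait loop above is polling for: the pod only reaches "Succeeded" once the container has read consistent data through the subpath while the atomic writer updates the volume.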
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:38:56.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jun 2 23:38:56.545: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-47deb376-7092-46a5-89a4-c14ca598dff5" in namespace "security-context-test-3957" to be "Succeeded or Failed"
Jun 2 23:38:56.548: INFO: Pod "busybox-readonly-false-47deb376-7092-46a5-89a4-c14ca598dff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.764038ms
Jun 2 23:38:58.577: INFO: Pod "busybox-readonly-false-47deb376-7092-46a5-89a4-c14ca598dff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031847669s
Jun 2 23:39:00.581: INFO: Pod "busybox-readonly-false-47deb376-7092-46a5-89a4-c14ca598dff5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0360153s
Jun 2 23:39:00.581: INFO: Pod "busybox-readonly-false-47deb376-7092-46a5-89a4-c14ca598dff5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:39:00.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3957" for this suite.
•
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":24,"failed":0}
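A minimal sketch of the pod shape this security-context test exercises, assuming an illustrative busybox image and write command; the assertion hinges on the explicit ReadOnlyRootFilesystem: false field:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// With ReadOnlyRootFilesystem set to false the container may write
// anywhere on its root filesystem, so a simple write-then-read command
// exits 0 and the pod reaches phase Succeeded, as the log shows.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "busybox-readonly-false",
			Image:   "busybox",
			Command: []string{"sh", "-c", "echo writable > /file && cat /file"},
			SecurityContext: &corev1.SecurityContext{
				ReadOnlyRootFilesystem: boolPtr(false),
			},
		}},
	},
}

func main() {}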
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:39:00.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 2 23:39:01.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-75dd644756\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Jun 2 23:39:03.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726737941, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 2 23:39:06.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:39:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9639" for this suite. STEP: Destroying namespace "webhook-9639-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.095 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":3,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:39:07.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Jun 2 23:39:07.818: INFO: Waiting up to 5m0s for pod "var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1" in namespace "var-expansion-7770" to be "Succeeded or Failed" Jun 2 23:39:07.834: INFO: Pod "var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.056444ms Jun 2 23:39:09.838: INFO: Pod "var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020034626s Jun 2 23:39:11.843: INFO: Pod "var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024573318s STEP: Saw pod success Jun 2 23:39:11.843: INFO: Pod "var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1" satisfied condition "Succeeded or Failed" Jun 2 23:39:11.846: INFO: Trying to get logs from node latest-worker2 pod var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1 container dapi-container: STEP: delete the pod Jun 2 23:39:12.039: INFO: Waiting for pod var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1 to disappear Jun 2 23:39:12.079: INFO: Pod var-expansion-7d75216c-ea48-4be2-8bff-cfe4aae748d1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:39:12.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7770" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":61,"failed":0} SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:39:12.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Jun 2 23:39:17.170: INFO: Pod pod-hostip-fc5d8e50-ab97-47f1-8f8d-d349cbe3cd94 has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:39:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9471" for this suite. 
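The hostIP the test reports comes straight from the pod's status once it is scheduled. A hedged client-go sketch of reading it; the pod name and kubeconfig path are illustrative (the suite goes through its own framework client rather than raw client-go like this):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the run above uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "pod-hostip-example" is a hypothetical name; once the pod is bound
	// to a node, status.hostIP carries that node's IP (172.17.0.12 above).
	pod, err := client.CoreV1().Pods("pods-9471").Get(context.TODO(), "pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP)
}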
SSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:39:17.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jun 2 23:39:17.314: INFO: Creating ReplicaSet my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3
Jun 2 23:39:17.352: INFO: Pod name my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3: Found 0 pods out of 1
Jun 2 23:39:22.356: INFO: Pod name my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3: Found 1 pods out of 1
Jun 2 23:39:22.356: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3" is running
Jun 2 23:39:22.360: INFO: Pod "my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3-79j8h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 23:39:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 23:39:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 23:39:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 23:39:17 +0000 UTC Reason: Message:}])
Jun 2 23:39:22.422: INFO: Trying to dial the pod
Jun 2 23:39:27.433: INFO: Controller my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3: Got expected result from replica 1 [my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3-79j8h]: "my-hostname-basic-3cf14cf1-235e-46b0-8e59-eb9d0de12bc3-79j8h", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:39:27.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7688" for this suite.
• [SLOW TEST:10.247 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":6,"skipped":68,"failed":0}
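A sketch of a one-replica ReplicaSet equivalent to what this test creates and then dials. The agnhost image is the one visible later in this log; the serve-hostname argument, port, and naming are illustrative assumptions:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// One replica that serves its own hostname; dialing each replica and
// getting the pod name back is how the test verifies every replica serves.
var rs = &appsv1.ReplicaSet{
	ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
	Spec: appsv1.ReplicaSetSpec{
		Replicas: int32Ptr(1),
		Selector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"name": "my-hostname-basic"},
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"name": "my-hostname-basic"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "my-hostname-basic",
					Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
					Args:  []string{"serve-hostname"},
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
				}},
			},
		},
	},
}

func main() {}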
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:39:27.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Jun 2 23:39:32.179: INFO: Successfully updated pod "labelsupdate1a84e9b9-c64f-4a71-b53e-101b74d385d2"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:39:34.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8487" for this suite.
• [SLOW TEST:6.779 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":85,"failed":0}
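The "labels" file that this test watches is a downward API volume item; the kubelet rewrites it in place when pod labels change, so the update is observable without restarting the container. A minimal sketch with illustrative names and image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Project metadata.labels into /etc/podinfo/labels; modifying the pod's
// labels after creation changes the file's contents, which the container
// can observe by re-reading it in a loop.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "labelsupdate",
		Labels: map[string]string{"key": "value1"},
	},
	Spec: corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path:     "labels",
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "podinfo",
				MountPath: "/etc/podinfo",
			}},
		}},
	},
}

func main() {}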
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:39:34.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-d04462c0-8592-4376-bb73-859605fa5915
STEP: Creating a pod to test consume secrets
Jun 2 23:39:34.379: INFO: Waiting up to 5m0s for pod "pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0" in namespace "secrets-4412" to be "Succeeded or Failed"
Jun 2 23:39:34.383: INFO: Pod "pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949055ms
Jun 2 23:39:36.387: INFO: Pod "pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00876179s
Jun 2 23:39:38.391: INFO: Pod "pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012547022s
STEP: Saw pod success
Jun 2 23:39:38.391: INFO: Pod "pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0" satisfied condition "Succeeded or Failed"
Jun 2 23:39:38.394: INFO: Trying to get logs from node latest-worker pod pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0 container secret-volume-test:
STEP: delete the pod
Jun 2 23:39:38.416: INFO: Waiting for pod pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0 to disappear
Jun 2 23:39:38.421: INFO: Pod pod-secrets-4c27786a-0475-4052-b994-b60a4bdddaf0 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:39:38.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4412" for this suite.
STEP: Destroying namespace "secret-namespace-4861" for this suite.
•
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":85,"failed":0}
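A minimal sketch of the secret mount under test, with illustrative names. The point of the test is that SecretName resolves only within the pod's own namespace, so the same-named secret in the second namespace above (secret-namespace-4861) can never leak into this mount:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Secret volumes are resolved by name inside the pod's namespace; a
// conflicting secret of the same name elsewhere is irrelevant.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-4412"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret-volume",
				MountPath: "/etc/secret-volume",
				ReadOnly:  true,
			}},
		}},
	},
}

func main() {}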
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":85,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:39:38.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 2 23:39:42.921: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:39:43.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5276" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":97,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:39:43.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 2 23:39:43.085: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:39:50.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8962" for this suite. 
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:39:50.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:40:24.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2032" for this suite.
• [SLOW TEST:33.853 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":101,"failed":0}
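A sketch of one variant this test runs. Reading the 'rpa'/'rpof'/'rpn' container names as restart-policy Always/OnFailure/Never is my inference from the log; with RestartPolicy Never, a container that exits non-zero leaves the pod in phase Failed, with the exit code recorded in the container's State.Terminated and RestartCount staying 0, while OnFailure and Always instead restart it and bump RestartCount:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Never-restart variant: the container exits 1 exactly once, and the
// resulting terminal status is what the test inspects.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpn"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "terminate-cmd-rpn",
			Image:   "busybox",
			Command: []string{"sh", "-c", "exit 1"},
		}},
	},
}

func main() {}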
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:40:24.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-cpqp
STEP: Creating a pod to test atomic-volume-subpath
Jun 2 23:40:24.610: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cpqp" in namespace "subpath-3929" to be "Succeeded or Failed"
Jun 2 23:40:24.622: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 11.411014ms
Jun 2 23:40:26.626: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015888709s
Jun 2 23:40:28.631: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 4.02066675s
Jun 2 23:40:30.638: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 6.027505577s
Jun 2 23:40:32.642: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 8.031909562s
Jun 2 23:40:34.647: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 10.036637763s
Jun 2 23:40:36.652: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 12.041331222s
Jun 2 23:40:38.656: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 14.045687438s
Jun 2 23:40:40.662: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 16.051789524s
Jun 2 23:40:42.666: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 18.056014308s
Jun 2 23:40:44.671: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 20.060893152s
Jun 2 23:40:46.676: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 22.065705845s
Jun 2 23:40:48.681: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Running", Reason="", readiness=true. Elapsed: 24.070687205s
Jun 2 23:40:50.686: INFO: Pod "pod-subpath-test-configmap-cpqp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.075497691s
STEP: Saw pod success
Jun 2 23:40:50.686: INFO: Pod "pod-subpath-test-configmap-cpqp" satisfied condition "Succeeded or Failed"
Jun 2 23:40:50.689: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-cpqp container test-container-subpath-configmap-cpqp:
STEP: delete the pod
Jun 2 23:40:50.760: INFO: Waiting for pod pod-subpath-test-configmap-cpqp to disappear
Jun 2 23:40:50.770: INFO: Pod pod-subpath-test-configmap-cpqp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cpqp
Jun 2 23:40:50.770: INFO: Deleting pod "pod-subpath-test-configmap-cpqp" in namespace "subpath-3929"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:40:50.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3929" for this suite.
• [SLOW TEST:26.245 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":12,"skipped":126,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:40:50.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-64633513-3141-432c-95aa-d1f378378f41
STEP: Creating a pod to test consume configMaps
Jun 2 23:40:50.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8" in namespace "configmap-7857" to be "Succeeded or Failed"
Jun 2 23:40:50.896: INFO: Pod "pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.798287ms
Jun 2 23:40:52.900: INFO: Pod "pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01959789s
Jun 2 23:40:54.903: INFO: Pod "pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023005289s
STEP: Saw pod success
Jun 2 23:40:54.903: INFO: Pod "pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8" satisfied condition "Succeeded or Failed"
Jun 2 23:40:54.906: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8 container configmap-volume-test:
STEP: delete the pod
Jun 2 23:40:54.934: INFO: Waiting for pod pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8 to disappear
Jun 2 23:40:54.944: INFO: Pod pod-configmaps-22b91fc9-a64a-4f84-8305-6cf115e4e3b8 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:40:54.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7857" for this suite.
•
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":137,"failed":0}
SSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:40:54.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6493.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6493.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6493.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6493.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6493.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6493.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 2 23:41:01.089: INFO: DNS probes using dns-6493/dns-test-7c90a1fd-37fc-43ff-a579-d7febbc42242 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:41:01.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6493" for this suite.
• [SLOW TEST:6.268 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":14,"skipped":142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:41:01.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 2 23:41:02.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 2 23:41:04.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738062, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738062, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738062, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738062, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 2 23:41:07.471: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 2 23:41:07.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4866" for this suite.
STEP: Destroying namespace "webhook-4866-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.406 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":15,"skipped":179,"failed":0}
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 2 23:41:07.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jun 2 23:41:07.733: INFO: Creating deployment "test-recreate-deployment"
Jun 2 23:41:07.758: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jun 2 23:41:07.839: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jun 2 23:41:09.846: INFO: Waiting deployment "test-recreate-deployment" to complete
Jun 2 23:41:09.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738067, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738067, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738067, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738067, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 2 23:41:11.856: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jun 2 23:41:11.864: INFO: Updating deployment test-recreate-deployment
Jun 2 23:41:11.864: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Jun 2 23:41:12.547: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9035 /apis/apps/v1/namespaces/deployment-9035/deployments/test-recreate-deployment 700d4458-327e-4c46-b7c0-fdd977d5e7d5 9792457 2 2020-06-02 23:41:07 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-02 23:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-02 23:41:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003728418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-02 23:41:12 +0000 UTC,LastTransitionTime:2020-06-02 23:41:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-06-02 23:41:12 +0000 UTC,LastTransitionTime:2020-06-02 23:41:07 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Jun 2 23:41:12.551: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-9035 /apis/apps/v1/namespaces/deployment-9035/replicasets/test-recreate-deployment-d5667d9c7 99003298-71ec-45a1-aa88-ffcb5dbe0bb8 9792455 1 2020-06-02 23:41:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 700d4458-327e-4c46-b7c0-fdd977d5e7d5 0xc003728920 0xc003728921}] [] [{kube-controller-manager Update apps/v1 2020-06-02 23:41:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"700d4458-327e-4c46-b7c0-fdd977d5e7d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003728998 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 2 23:41:12.551: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jun 2 23:41:12.551: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-9035 /apis/apps/v1/namespaces/deployment-9035/replicasets/test-recreate-deployment-6d65b9f6d8 aa2afa3d-6879-410e-b2b4-a25e5c7fd99c 9792444 2 2020-06-02 23:41:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 700d4458-327e-4c46-b7c0-fdd977d5e7d5 0xc003728827 0xc003728828}] [] [{kube-controller-manager Update apps/v1 2020-06-02 23:41:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"700d4458-327e-4c46-b7c0-fdd977d5e7d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037288b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jun 2 23:41:12.555: INFO: Pod "test-recreate-deployment-d5667d9c7-4q4nv" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-4q4nv test-recreate-deployment-d5667d9c7- deployment-9035 /api/v1/namespaces/deployment-9035/pods/test-recreate-deployment-d5667d9c7-4q4nv b0748315-6196-47f1-a66b-ed036824afa7 9792458 0 2020-06-02 23:41:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 99003298-71ec-45a1-aa88-ffcb5dbe0bb8 0xc003728e60 0xc003728e61}] [] [{kube-controller-manager Update v1 2020-06-02 23:41:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99003298-71ec-45a1-aa88-ffcb5dbe0bb8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:41:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zkxp4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zkxp4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zkxp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:41:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:41:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:41:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:41:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-02 23:41:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:41:12.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9035" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":16,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:41:12.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 2 23:41:12.684: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 2 23:41:13.454: INFO: Waiting for terminating namespaces to be deleted... 
Jun 2 23:41:13.549: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jun 2 23:41:13.554: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) Jun 2 23:41:13.554: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 2 23:41:13.554: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) Jun 2 23:41:13.554: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 2 23:41:13.554: INFO: test-recreate-deployment-d5667d9c7-4q4nv from deployment-9035 started at 2020-06-02 23:41:12 +0000 UTC (1 container status recorded) Jun 2 23:41:13.554: INFO: Container httpd ready: false, restart count 0 Jun 2 23:41:13.554: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 2 23:41:13.554: INFO: Container kindnet-cni ready: true, restart count 2 Jun 2 23:41:13.554: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 2 23:41:13.554: INFO: Container kube-proxy ready: true, restart count 0 Jun 2 23:41:13.554: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jun 2 23:41:13.558: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) Jun 2 23:41:13.558: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 2 23:41:13.558: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) Jun 2 23:41:13.558: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 2 23:41:13.558: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 2 23:41:13.558: INFO: Container kindnet-cni ready: true, restart count 2 Jun 2 23:41:13.558: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 2 23:41:13.558: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-bb191032-513a-4527-874d-205c85c0ec2c 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-bb191032-513a-4527-874d-205c85c0ec2c from the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-bb191032-513a-4527-874d-205c85c0ec2c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:41:30.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9260" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.599 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":17,"skipped":224,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:41:30.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-bbc6f20c-7816-40d9-b1c9-9a194890f1d3 STEP: Creating a pod to test consume configMaps Jun 2 23:41:30.234: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732" in namespace "projected-3652" to be "Succeeded or Failed" Jun 2 23:41:30.262: INFO: Pod "pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732": Phase="Pending", Reason="", readiness=false. Elapsed: 27.442732ms Jun 2 23:41:32.267: INFO: Pod "pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032373682s Jun 2 23:41:34.272: INFO: Pod "pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037721623s STEP: Saw pod success Jun 2 23:41:34.272: INFO: Pod "pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732" satisfied condition "Succeeded or Failed" Jun 2 23:41:34.275: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732 container projected-configmap-volume-test: STEP: delete the pod Jun 2 23:41:34.335: INFO: Waiting for pod pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732 to disappear Jun 2 23:41:34.442: INFO: Pod pod-projected-configmaps-f1d35eb1-49de-4f8e-8d88-0b1abd7a2732 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:41:34.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3652" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":227,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:41:34.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-683 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-683 to expose endpoints map[] Jun 2 23:41:34.601: INFO: Get endpoints failed (3.742951ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 2 23:41:35.626: INFO: successfully validated that service endpoint-test2 in namespace services-683 exposes endpoints map[] (1.029169623s elapsed) STEP: Creating pod pod1 in namespace services-683 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-683 to expose endpoints map[pod1:[80]] Jun 2 23:41:39.886: INFO: successfully validated that service endpoint-test2 in namespace services-683 exposes endpoints map[pod1:[80]] (4.220987758s elapsed) STEP: Creating pod pod2 in namespace services-683 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-683 to expose endpoints map[pod1:[80] pod2:[80]] Jun 2 23:41:43.033: INFO: successfully validated that service endpoint-test2 in namespace services-683 exposes endpoints map[pod1:[80] pod2:[80]] (3.130545722s elapsed) STEP: Deleting pod pod1 in namespace services-683 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-683 to expose endpoints map[pod2:[80]] Jun 2 23:41:44.072: INFO: successfully validated that service endpoint-test2 in namespace services-683 exposes endpoints map[pod2:[80]] (1.033733615s elapsed) STEP: Deleting pod pod2 in namespace services-683 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-683 to expose endpoints map[] Jun 2 23:41:45.159: INFO: 
successfully validated that service endpoint-test2 in namespace services-683 exposes endpoints map[] (1.0831283s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:41:45.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-683" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.089 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":19,"skipped":237,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:41:45.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 2 23:41:45.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc" in namespace "projected-3768" to be "Succeeded or Failed" Jun 2 23:41:45.694: INFO: Pod "downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 41.867259ms Jun 2 23:41:47.908: INFO: Pod "downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256450268s Jun 2 23:41:49.912: INFO: Pod "downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.260261458s Jun 2 23:41:51.917: INFO: Pod "downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.26472657s STEP: Saw pod success Jun 2 23:41:51.917: INFO: Pod "downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc" satisfied condition "Succeeded or Failed" Jun 2 23:41:51.920: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc container client-container: STEP: delete the pod Jun 2 23:41:51.982: INFO: Waiting for pod downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc to disappear Jun 2 23:41:51.989: INFO: Pod downwardapi-volume-6477071c-8c72-488b-842b-203f645df0dc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:41:51.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3768" for this suite. • [SLOW TEST:6.455 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":20,"skipped":247,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:41:51.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jun 2 23:41:52.107: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jun 2 23:41:52.121: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 2 23:41:52.121: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jun 2 23:41:52.145: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 2 23:41:52.145: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jun 2 23:41:52.186: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jun 2 23:41:52.186: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jun 2 23:41:59.743: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:41:59.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2154" for this suite. 
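[Editor's note] The LimitRange run above verifies that a pod created with no resource requirements picks up the range's defaults (requests of 100m CPU / 200Mi memory / 200Gi ephemeral-storage; limits of 500m / 500Mi / 500Gi, per the quantities logged). A minimal client-go sketch of an equivalent LimitRange follows; this is not the suite's own code, and the object name and "default" namespace are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the same kubeconfig path the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "example-limitrange"}, // hypothetical name
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied as requests to containers that set none.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				// Applied as limits to containers that set none.
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	// Pods created in this namespace afterwards inherit these values unless
	// they set their own -- the behavior the test asserts above.
	if _, err := client.CoreV1().LimitRanges("default").Create(
		context.TODO(), lr, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
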
• [SLOW TEST:7.804 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":21,"skipped":254,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:41:59.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 2 23:41:59.972: INFO: Waiting up to 5m0s for pod "pod-0439db99-deec-46d1-9226-10d7929a5e7b" in namespace "emptydir-2153" to be "Succeeded or Failed" Jun 2 23:41:59.992: INFO: Pod "pod-0439db99-deec-46d1-9226-10d7929a5e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.590857ms Jun 2 23:42:02.006: INFO: Pod "pod-0439db99-deec-46d1-9226-10d7929a5e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034279161s Jun 2 23:42:04.034: INFO: Pod "pod-0439db99-deec-46d1-9226-10d7929a5e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061758864s Jun 2 23:42:06.041: INFO: Pod "pod-0439db99-deec-46d1-9226-10d7929a5e7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069173413s STEP: Saw pod success Jun 2 23:42:06.041: INFO: Pod "pod-0439db99-deec-46d1-9226-10d7929a5e7b" satisfied condition "Succeeded or Failed" Jun 2 23:42:06.352: INFO: Trying to get logs from node latest-worker pod pod-0439db99-deec-46d1-9226-10d7929a5e7b container test-container: STEP: delete the pod Jun 2 23:42:06.888: INFO: Waiting for pod pod-0439db99-deec-46d1-9226-10d7929a5e7b to disappear Jun 2 23:42:07.023: INFO: Pod pod-0439db99-deec-46d1-9226-10d7929a5e7b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:42:07.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2153" for this suite. 
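[Editor's note] The emptydir test above ("non-root,0644,tmpfs") creates a pod whose emptyDir volume uses medium "Memory" (tmpfs) and runs as a non-root user. A rough sketch of that pod shape follows; the busybox command stands in for the suite's mounttest image, and the UID, names, and namespace are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	uid := int64(1001) // hypothetical non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"touch /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
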
• [SLOW TEST:7.389 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":256,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:42:07.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-c3f88727-2e98-4188-bf3a-08f736d60188 STEP: Creating a pod to test consume configMaps Jun 2 23:42:07.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157" in namespace "projected-4862" to be "Succeeded or Failed" Jun 2 23:42:07.627: INFO: Pod "pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157": Phase="Pending", Reason="", readiness=false. Elapsed: 19.37129ms Jun 2 23:42:09.632: INFO: Pod "pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023603927s Jun 2 23:42:11.636: INFO: Pod "pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157": Phase="Running", Reason="", readiness=true. Elapsed: 4.028122704s Jun 2 23:42:13.641: INFO: Pod "pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032784324s STEP: Saw pod success Jun 2 23:42:13.641: INFO: Pod "pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157" satisfied condition "Succeeded or Failed" Jun 2 23:42:13.644: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157 container projected-configmap-volume-test: STEP: delete the pod Jun 2 23:42:13.706: INFO: Waiting for pod pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157 to disappear Jun 2 23:42:13.713: INFO: Pod pod-projected-configmaps-1073a1bc-5fe6-4af1-9345-f331e99cb157 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:42:13.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4862" for this suite. 
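[Editor's note] "Consumable from pods in volume with mappings" refers to a projected configMap volume whose Items remap a key to a chosen file path. A sketch of the relevant objects follows, under assumed names (demo-config, data-1, the mount path) rather than the test's generated ones:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
								// The "mapping": key data-1 is surfaced at
								// path/to/data-1 instead of under its key name.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.29",
				Command:      []string{"cat", "/etc/projected/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
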
• [SLOW TEST:6.530 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:42:13.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0602 23:42:14.923120 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 2 23:42:14.923: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:42:14.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5529" for this suite. 
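[Editor's note] The garbage collector test above deletes a deployment with deleteOptions.PropagationPolicy=Orphan and then confirms the ReplicaSet it created is not collected. A minimal client-go sketch of that delete-and-check, with a hypothetical deployment name and namespace:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Orphan the dependents: the deployment goes away, but the garbage
	// collector must leave its ReplicaSet (and that RS's pods) behind.
	orphan := metav1.DeletePropagationOrphan
	if err := client.AppsV1().Deployments("default").Delete(ctx, "demo-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}

	// The surviving ReplicaSets keep the deployment's labels; the GC strips
	// their ownerReferences instead of deleting them.
	rss, err := client.AppsV1().ReplicaSets("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Printf("replicaset %s still exists (owners: %d)\n", rs.Name, len(rs.OwnerReferences))
	}
}
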
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":24,"skipped":277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:42:14.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-f5e18326-8e31-4622-a699-5b66f2a7b2d9 STEP: Creating a pod to test consume configMaps Jun 2 23:42:15.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737" in namespace "projected-3186" to be "Succeeded or Failed" Jun 2 23:42:15.100: INFO: Pod "pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737": Phase="Pending", Reason="", readiness=false. Elapsed: 19.195149ms Jun 2 23:42:17.104: INFO: Pod "pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02325698s Jun 2 23:42:19.406: INFO: Pod "pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324506503s Jun 2 23:42:21.410: INFO: Pod "pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.329023002s STEP: Saw pod success Jun 2 23:42:21.410: INFO: Pod "pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737" satisfied condition "Succeeded or Failed" Jun 2 23:42:21.414: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737 container projected-configmap-volume-test: STEP: delete the pod Jun 2 23:42:21.547: INFO: Waiting for pod pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737 to disappear Jun 2 23:42:21.551: INFO: Pod pod-projected-configmaps-6d64cd77-d275-44c1-9664-fcb4beb9e737 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:42:21.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3186" for this suite. 
• [SLOW TEST:6.628 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":25,"skipped":318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:42:21.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-602.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-602.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-602.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-602.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 2 23:42:30.038: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.041: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.044: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.047: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.057: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.060: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.064: INFO: Unable to read jessie_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.067: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:30.074: INFO: Lookups using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local] Jun 2 23:42:35.079: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods 
dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.083: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.087: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.091: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.101: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.104: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.108: INFO: Unable to read jessie_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.111: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:35.117: INFO: Lookups using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local] Jun 2 23:42:40.078: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.080: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.083: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.086: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod 
dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.094: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.097: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.100: INFO: Unable to read jessie_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.103: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:40.108: INFO: Lookups using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local] Jun 2 23:42:45.080: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.083: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.087: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.090: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.100: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.104: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.107: INFO: 
Unable to read jessie_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.111: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:45.117: INFO: Lookups using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local] Jun 2 23:42:50.098: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.101: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.104: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.107: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.116: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.119: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.122: INFO: Unable to read jessie_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.124: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:50.131: INFO: Lookups using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local] Jun 2 23:42:55.079: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.083: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.087: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.090: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.102: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.105: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.108: INFO: Unable to read jessie_udp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.111: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local from pod dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea: the server could not find the requested resource (get pods dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea) Jun 2 23:42:55.117: INFO: Lookups using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local wheezy_udp@dns-test-service-2.dns-602.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-602.svc.cluster.local jessie_udp@dns-test-service-2.dns-602.svc.cluster.local jessie_tcp@dns-test-service-2.dns-602.svc.cluster.local] Jun 2 23:43:00.116: INFO: DNS probes using dns-602/dns-test-0d6259f5-503c-48e6-9a09-f712e57fc2ea succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 
23:43:00.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-602" for this suite. • [SLOW TEST:39.175 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":26,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:00.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-77b16757-4641-4d93-8a59-719641047451 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:00.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1278" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":27,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:00.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
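
The pod created in the next step attaches a postStart httpGet hook pointing at the handler container set up above. A minimal stand-in manifest, assuming an illustrative image and a placeholder handler IP (the real fixture's values differ from this run), looks like:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  restartPolicy: Never
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2     # illustrative long-running image
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.5          # placeholder: IP of the hook-handler pod created above
          port: 8080
          path: /echo?msg=poststart
EOF

The hook fires right after the container starts, which is why the "check poststart hook" step below can assert that the handler received the request before the pod is deleted.
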
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 2 23:43:09.225: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 23:43:09.243: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 23:43:11.243: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 23:43:11.247: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 23:43:13.243: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 23:43:13.248: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 23:43:15.243: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 23:43:15.280: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 23:43:17.243: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 23:43:17.247: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:17.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3651" for this suite. • [SLOW TEST:16.407 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":396,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:17.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Jun 2 23:43:17.323: INFO: Waiting up to 5m0s for pod "var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9" in namespace "var-expansion-1739" to be "Succeeded or Failed" Jun 2 23:43:17.381: INFO: Pod "var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.602886ms Jun 2 23:43:19.586: INFO: Pod "var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.262724814s Jun 2 23:43:21.590: INFO: Pod "var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.266883917s STEP: Saw pod success Jun 2 23:43:21.590: INFO: Pod "var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9" satisfied condition "Succeeded or Failed" Jun 2 23:43:21.593: INFO: Trying to get logs from node latest-worker2 pod var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9 container dapi-container: STEP: delete the pod Jun 2 23:43:21.627: INFO: Waiting for pod var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9 to disappear Jun 2 23:43:21.773: INFO: Pod var-expansion-532c887e-0f46-41e3-9cb9-04e5ba4540a9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:21.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1739" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":401,"failed":0} ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:21.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:43:21.837: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-699082dc-2da7-4d52-9de8-46e5dc0356c0" in namespace "security-context-test-4538" to be "Succeeded or Failed" Jun 2 23:43:21.842: INFO: Pod "busybox-privileged-false-699082dc-2da7-4d52-9de8-46e5dc0356c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072862ms Jun 2 23:43:23.845: INFO: Pod "busybox-privileged-false-699082dc-2da7-4d52-9de8-46e5dc0356c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007790927s Jun 2 23:43:25.851: INFO: Pod "busybox-privileged-false-699082dc-2da7-4d52-9de8-46e5dc0356c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013568015s Jun 2 23:43:25.851: INFO: Pod "busybox-privileged-false-699082dc-2da7-4d52-9de8-46e5dc0356c0" satisfied condition "Succeeded or Failed" Jun 2 23:43:25.858: INFO: Got logs for pod "busybox-privileged-false-699082dc-2da7-4d52-9de8-46e5dc0356c0": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:25.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4538" for this suite. 
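
The "Operation not permitted" line above is the expected outcome: with privileged: false, RTNETLINK operations inside the container are denied. A rough stand-in for the pod under test (the pinned image tag and exact ip arguments are assumptions, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: busybox:1.31
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false             # the property this conformance test exercises
EOF

The trailing "|| true" keeps the pod in Succeeded even though the ip command fails, matching the Phase="Succeeded" transition logged above.
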
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:25.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-4f395b19-c3b5-410a-9796-a26a0c2906c7 STEP: Creating a pod to test consume configMaps Jun 2 23:43:25.959: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4" in namespace "projected-3075" to be "Succeeded or Failed" Jun 2 23:43:25.977: INFO: Pod "pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.988625ms Jun 2 23:43:27.981: INFO: Pod "pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022528654s Jun 2 23:43:29.990: INFO: Pod "pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030777228s STEP: Saw pod success Jun 2 23:43:29.990: INFO: Pod "pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4" satisfied condition "Succeeded or Failed" Jun 2 23:43:29.992: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4 container projected-configmap-volume-test: STEP: delete the pod Jun 2 23:43:30.097: INFO: Waiting for pod pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4 to disappear Jun 2 23:43:30.159: INFO: Pod pod-projected-configmaps-cf9aed52-f957-4731-a7b4-f07568aaccf4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:30.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3075" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":430,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:30.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5efaa267-d18e-4e58-911f-b492511d4282 STEP: Creating secret with name s-test-opt-upd-3a016572-0768-469b-a979-4d0a73990a56 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5efaa267-d18e-4e58-911f-b492511d4282 STEP: Updating secret s-test-opt-upd-3a016572-0768-469b-a979-4d0a73990a56 STEP: Creating secret with name s-test-opt-create-9ea60827-d3a3-4e13-98bf-4c0d2ec28150 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:38.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3693" for this suite. • [SLOW TEST:8.255 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:38.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 2 23:43:38.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd" in namespace "downward-api-4283" to be "Succeeded or Failed" Jun 2 23:43:38.483: INFO: Pod "downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.288688ms Jun 2 23:43:40.487: INFO: Pod "downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007456356s Jun 2 23:43:42.491: INFO: Pod "downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011878317s STEP: Saw pod success Jun 2 23:43:42.491: INFO: Pod "downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd" satisfied condition "Succeeded or Failed" Jun 2 23:43:42.495: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd container client-container: STEP: delete the pod Jun 2 23:43:42.527: INFO: Waiting for pod downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd to disappear Jun 2 23:43:42.537: INFO: Pod downwardapi-volume-86183ae0-0d51-4920-8191-39cf380be4fd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:42.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4283" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":33,"skipped":492,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:42.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-d5cc9052-abb5-4b69-8519-3931f6be3682 STEP: Creating a pod to test consume secrets Jun 2 23:43:42.636: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1" in namespace "projected-2538" to be "Succeeded or Failed" Jun 2 23:43:42.670: INFO: Pod "pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.518785ms Jun 2 23:43:44.783: INFO: Pod "pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147220171s Jun 2 23:43:46.825: INFO: Pod "pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189057621s Jun 2 23:43:48.830: INFO: Pod "pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.194158163s STEP: Saw pod success Jun 2 23:43:48.830: INFO: Pod "pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1" satisfied condition "Succeeded or Failed" Jun 2 23:43:48.834: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1 container projected-secret-volume-test: STEP: delete the pod Jun 2 23:43:48.882: INFO: Waiting for pod pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1 to disappear Jun 2 23:43:48.896: INFO: Pod pod-projected-secrets-15987671-d17d-4f26-80f2-f1f93bf5eaa1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:48.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2538" for this suite. • [SLOW TEST:6.357 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":505,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:48.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:53.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5701" for this suite. 
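
"A busybox command that always fails" reduces to a container that exits non-zero; the kubelet then records a terminated state whose reason is what the test asserts on. A minimal sketch (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.31
    command: ["/bin/false"]         # always exits 1
EOF

# Inspect the terminated reason the kubelet recorded (expected: Error):
kubectl get pod bin-false \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
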
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":531,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:53.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 2 23:43:53.843: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 2 23:43:55.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738233, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738233, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738233, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738233, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 2 23:43:58.887: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:43:58.902: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6914" for this suite. STEP: Destroying namespace "webhook-6914-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.953 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":36,"skipped":549,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:43:58.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 in namespace container-probe-9960 Jun 2 23:44:03.118: INFO: Started pod liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 in namespace container-probe-9960 STEP: checking the pod's current state and verifying that restartCount is present Jun 2 23:44:03.121: INFO: Initial restart count of pod liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 is 0 Jun 2 23:44:17.339: INFO: Restart count of pod container-probe-9960/liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 is now 1 (14.217875612s elapsed) Jun 2 23:44:37.418: INFO: Restart count of pod container-probe-9960/liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 is now 2 (34.296942374s elapsed) Jun 2 23:44:57.491: INFO: Restart count of pod container-probe-9960/liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 is now 3 (54.369947729s elapsed) Jun 2 23:45:17.534: INFO: Restart count of pod container-probe-9960/liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 is now 4 (1m14.412848546s elapsed) Jun 2 23:46:31.710: INFO: Restart count of pod container-probe-9960/liveness-86299ac0-8fa9-44c2-8ca3-abebdadda587 is now 5 (2m28.588970047s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:46:31.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9960" for this suite. 
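
The restart counts above (1, 2, 3, 4, 5, never decreasing) are driven by a liveness probe that keeps failing. An illustrative pod in the same spirit, assuming a busybox image and hand-picked timings rather than the fixture's real values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox:1.31
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # passes for ~10s, then fails forever
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF

# Re-run to watch restartCount climb monotonically:
kubectl get pod liveness-exec \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
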
• [SLOW TEST:152.798 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":559,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:46:31.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7483 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7483 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7483 Jun 2 23:46:31.972: INFO: Found 0 stateful pods, waiting for 1 Jun 2 23:46:41.977: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 2 23:46:41.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 2 23:46:45.454: INFO: stderr: "I0602 23:46:45.339258 33 log.go:172] (0xc00003a8f0) (0xc00089b720) Create stream\nI0602 23:46:45.339320 33 log.go:172] (0xc00003a8f0) (0xc00089b720) Stream added, broadcasting: 1\nI0602 23:46:45.341894 33 log.go:172] (0xc00003a8f0) Reply frame received for 1\nI0602 23:46:45.341943 33 log.go:172] (0xc00003a8f0) (0xc000760000) Create stream\nI0602 23:46:45.341954 33 log.go:172] (0xc00003a8f0) (0xc000760000) Stream added, broadcasting: 3\nI0602 23:46:45.343152 33 log.go:172] (0xc00003a8f0) Reply frame received for 3\nI0602 23:46:45.343217 33 log.go:172] (0xc00003a8f0) (0xc0007366e0) Create stream\nI0602 23:46:45.343255 33 log.go:172] (0xc00003a8f0) (0xc0007366e0) Stream added, broadcasting: 5\nI0602 23:46:45.344292 33 log.go:172] (0xc00003a8f0) Reply frame received for 5\nI0602 23:46:45.410145 33 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0602 23:46:45.410177 33 log.go:172] (0xc0007366e0) (5) Data frame handling\nI0602 23:46:45.410202 33 log.go:172] (0xc0007366e0) (5) Data frame sent\n+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\nI0602 23:46:45.445310 33 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0602 23:46:45.445375 33 log.go:172] (0xc0007366e0) (5) Data frame handling\nI0602 23:46:45.445414 33 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0602 23:46:45.445443 33 log.go:172] (0xc000760000) (3) Data frame handling\nI0602 23:46:45.445467 33 log.go:172] (0xc000760000) (3) Data frame sent\nI0602 23:46:45.445581 33 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0602 23:46:45.445596 33 log.go:172] (0xc000760000) (3) Data frame handling\nI0602 23:46:45.447614 33 log.go:172] (0xc00003a8f0) Data frame received for 1\nI0602 23:46:45.447634 33 log.go:172] (0xc00089b720) (1) Data frame handling\nI0602 23:46:45.447653 33 log.go:172] (0xc00089b720) (1) Data frame sent\nI0602 23:46:45.447684 33 log.go:172] (0xc00003a8f0) (0xc00089b720) Stream removed, broadcasting: 1\nI0602 23:46:45.447703 33 log.go:172] (0xc00003a8f0) Go away received\nI0602 23:46:45.447940 33 log.go:172] (0xc00003a8f0) (0xc00089b720) Stream removed, broadcasting: 1\nI0602 23:46:45.447952 33 log.go:172] (0xc00003a8f0) (0xc000760000) Stream removed, broadcasting: 3\nI0602 23:46:45.447958 33 log.go:172] (0xc00003a8f0) (0xc0007366e0) Stream removed, broadcasting: 5\n" Jun 2 23:46:45.454: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 2 23:46:45.454: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 2 23:46:45.458: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 2 23:46:55.463: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 2 23:46:55.463: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 23:46:55.503: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999714s Jun 2 23:46:56.509: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.970629797s Jun 2 23:46:57.512: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.965137266s Jun 2 23:46:58.517: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.962028074s Jun 2 23:46:59.522: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.956825229s Jun 2 23:47:00.527: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952423517s Jun 2 23:47:01.532: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.947011844s Jun 2 23:47:02.537: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.941974725s Jun 2 23:47:03.543: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.936782342s Jun 2 23:47:04.547: INFO: Verifying statefulset ss doesn't scale past 1 for another 931.447307ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7483 Jun 2 23:47:05.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 2 23:47:05.797: INFO: stderr: "I0602 23:47:05.708471 62 log.go:172] (0xc000948e70) (0xc000a485a0) Create stream\nI0602 23:47:05.708534 62 log.go:172] (0xc000948e70) (0xc000a485a0) Stream added, broadcasting: 1\nI0602 23:47:05.713641 62 log.go:172] (0xc000948e70) Reply frame received for 1\nI0602 23:47:05.713693 62 log.go:172] (0xc000948e70) (0xc000802dc0) Create 
stream\nI0602 23:47:05.713708 62 log.go:172] (0xc000948e70) (0xc000802dc0) Stream added, broadcasting: 3\nI0602 23:47:05.714756 62 log.go:172] (0xc000948e70) Reply frame received for 3\nI0602 23:47:05.714800 62 log.go:172] (0xc000948e70) (0xc000668140) Create stream\nI0602 23:47:05.714815 62 log.go:172] (0xc000948e70) (0xc000668140) Stream added, broadcasting: 5\nI0602 23:47:05.715745 62 log.go:172] (0xc000948e70) Reply frame received for 5\nI0602 23:47:05.789378 62 log.go:172] (0xc000948e70) Data frame received for 3\nI0602 23:47:05.789414 62 log.go:172] (0xc000802dc0) (3) Data frame handling\nI0602 23:47:05.789428 62 log.go:172] (0xc000802dc0) (3) Data frame sent\nI0602 23:47:05.789492 62 log.go:172] (0xc000948e70) Data frame received for 3\nI0602 23:47:05.789510 62 log.go:172] (0xc000802dc0) (3) Data frame handling\nI0602 23:47:05.789556 62 log.go:172] (0xc000948e70) Data frame received for 5\nI0602 23:47:05.789586 62 log.go:172] (0xc000668140) (5) Data frame handling\nI0602 23:47:05.789608 62 log.go:172] (0xc000668140) (5) Data frame sent\nI0602 23:47:05.789622 62 log.go:172] (0xc000948e70) Data frame received for 5\nI0602 23:47:05.789638 62 log.go:172] (0xc000668140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0602 23:47:05.791068 62 log.go:172] (0xc000948e70) Data frame received for 1\nI0602 23:47:05.791080 62 log.go:172] (0xc000a485a0) (1) Data frame handling\nI0602 23:47:05.791087 62 log.go:172] (0xc000a485a0) (1) Data frame sent\nI0602 23:47:05.791095 62 log.go:172] (0xc000948e70) (0xc000a485a0) Stream removed, broadcasting: 1\nI0602 23:47:05.791332 62 log.go:172] (0xc000948e70) (0xc000a485a0) Stream removed, broadcasting: 1\nI0602 23:47:05.791346 62 log.go:172] (0xc000948e70) (0xc000802dc0) Stream removed, broadcasting: 3\nI0602 23:47:05.791481 62 log.go:172] (0xc000948e70) Go away received\nI0602 23:47:05.791528 62 log.go:172] (0xc000948e70) (0xc000668140) Stream removed, broadcasting: 5\n" Jun 2 23:47:05.797: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 2 23:47:05.797: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 2 23:47:05.801: INFO: Found 1 stateful pods, waiting for 3 Jun 2 23:47:15.806: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 23:47:15.806: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 23:47:15.806: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 2 23:47:15.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 2 23:47:16.077: INFO: stderr: "I0602 23:47:15.980000 85 log.go:172] (0xc0002fdc30) (0xc000029e00) Create stream\nI0602 23:47:15.980060 85 log.go:172] (0xc0002fdc30) (0xc000029e00) Stream added, broadcasting: 1\nI0602 23:47:15.983030 85 log.go:172] (0xc0002fdc30) Reply frame received for 1\nI0602 23:47:15.983083 85 log.go:172] (0xc0002fdc30) (0xc0002a2dc0) Create stream\nI0602 23:47:15.983096 85 log.go:172] (0xc0002fdc30) (0xc0002a2dc0) Stream added, broadcasting: 3\nI0602 23:47:15.984338 85 log.go:172] (0xc0002fdc30) Reply frame received for 3\nI0602 23:47:15.984371 85 
log.go:172] (0xc0002fdc30) (0xc00014d720) Create stream\nI0602 23:47:15.984381 85 log.go:172] (0xc0002fdc30) (0xc00014d720) Stream added, broadcasting: 5\nI0602 23:47:15.985598 85 log.go:172] (0xc0002fdc30) Reply frame received for 5\nI0602 23:47:16.068361 85 log.go:172] (0xc0002fdc30) Data frame received for 3\nI0602 23:47:16.068395 85 log.go:172] (0xc0002a2dc0) (3) Data frame handling\nI0602 23:47:16.068403 85 log.go:172] (0xc0002a2dc0) (3) Data frame sent\nI0602 23:47:16.068410 85 log.go:172] (0xc0002fdc30) Data frame received for 3\nI0602 23:47:16.068418 85 log.go:172] (0xc0002a2dc0) (3) Data frame handling\nI0602 23:47:16.068426 85 log.go:172] (0xc0002fdc30) Data frame received for 5\nI0602 23:47:16.068430 85 log.go:172] (0xc00014d720) (5) Data frame handling\nI0602 23:47:16.068435 85 log.go:172] (0xc00014d720) (5) Data frame sent\nI0602 23:47:16.068440 85 log.go:172] (0xc0002fdc30) Data frame received for 5\nI0602 23:47:16.068444 85 log.go:172] (0xc00014d720) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0602 23:47:16.070075 85 log.go:172] (0xc0002fdc30) Data frame received for 1\nI0602 23:47:16.070120 85 log.go:172] (0xc000029e00) (1) Data frame handling\nI0602 23:47:16.070161 85 log.go:172] (0xc000029e00) (1) Data frame sent\nI0602 23:47:16.070218 85 log.go:172] (0xc0002fdc30) (0xc000029e00) Stream removed, broadcasting: 1\nI0602 23:47:16.070248 85 log.go:172] (0xc0002fdc30) Go away received\nI0602 23:47:16.070646 85 log.go:172] (0xc0002fdc30) (0xc000029e00) Stream removed, broadcasting: 1\nI0602 23:47:16.070671 85 log.go:172] (0xc0002fdc30) (0xc0002a2dc0) Stream removed, broadcasting: 3\nI0602 23:47:16.070684 85 log.go:172] (0xc0002fdc30) (0xc00014d720) Stream removed, broadcasting: 5\n" Jun 2 23:47:16.077: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 2 23:47:16.077: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 2 23:47:16.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 2 23:47:16.338: INFO: stderr: "I0602 23:47:16.216246 105 log.go:172] (0xc00003ae70) (0xc000138e60) Create stream\nI0602 23:47:16.216301 105 log.go:172] (0xc00003ae70) (0xc000138e60) Stream added, broadcasting: 1\nI0602 23:47:16.219265 105 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0602 23:47:16.219320 105 log.go:172] (0xc00003ae70) (0xc000238a00) Create stream\nI0602 23:47:16.219334 105 log.go:172] (0xc00003ae70) (0xc000238a00) Stream added, broadcasting: 3\nI0602 23:47:16.220377 105 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0602 23:47:16.220413 105 log.go:172] (0xc00003ae70) (0xc0001399a0) Create stream\nI0602 23:47:16.220422 105 log.go:172] (0xc00003ae70) (0xc0001399a0) Stream added, broadcasting: 5\nI0602 23:47:16.222153 105 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0602 23:47:16.295933 105 log.go:172] (0xc00003ae70) Data frame received for 5\nI0602 23:47:16.295958 105 log.go:172] (0xc0001399a0) (5) Data frame handling\nI0602 23:47:16.295970 105 log.go:172] (0xc0001399a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0602 23:47:16.329248 105 log.go:172] (0xc00003ae70) Data frame received for 3\nI0602 23:47:16.329275 105 log.go:172] (0xc000238a00) (3) Data frame handling\nI0602 23:47:16.329289 105 
log.go:172] (0xc000238a00) (3) Data frame sent\nI0602 23:47:16.329767 105 log.go:172] (0xc00003ae70) Data frame received for 3\nI0602 23:47:16.329779 105 log.go:172] (0xc000238a00) (3) Data frame handling\nI0602 23:47:16.329849 105 log.go:172] (0xc00003ae70) Data frame received for 5\nI0602 23:47:16.329895 105 log.go:172] (0xc0001399a0) (5) Data frame handling\nI0602 23:47:16.331780 105 log.go:172] (0xc00003ae70) Data frame received for 1\nI0602 23:47:16.331793 105 log.go:172] (0xc000138e60) (1) Data frame handling\nI0602 23:47:16.331803 105 log.go:172] (0xc000138e60) (1) Data frame sent\nI0602 23:47:16.331810 105 log.go:172] (0xc00003ae70) (0xc000138e60) Stream removed, broadcasting: 1\nI0602 23:47:16.331904 105 log.go:172] (0xc00003ae70) Go away received\nI0602 23:47:16.332031 105 log.go:172] (0xc00003ae70) (0xc000138e60) Stream removed, broadcasting: 1\nI0602 23:47:16.332041 105 log.go:172] (0xc00003ae70) (0xc000238a00) Stream removed, broadcasting: 3\nI0602 23:47:16.332046 105 log.go:172] (0xc00003ae70) (0xc0001399a0) Stream removed, broadcasting: 5\n" Jun 2 23:47:16.338: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 2 23:47:16.338: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 2 23:47:16.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 2 23:47:16.590: INFO: stderr: "I0602 23:47:16.490194 128 log.go:172] (0xc00003ac60) (0xc000276140) Create stream\nI0602 23:47:16.490260 128 log.go:172] (0xc00003ac60) (0xc000276140) Stream added, broadcasting: 1\nI0602 23:47:16.494174 128 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0602 23:47:16.494225 128 log.go:172] (0xc00003ac60) (0xc000540fa0) Create stream\nI0602 23:47:16.494242 128 log.go:172] (0xc00003ac60) (0xc000540fa0) Stream added, broadcasting: 3\nI0602 23:47:16.495401 128 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0602 23:47:16.495444 128 log.go:172] (0xc00003ac60) (0xc0004085a0) Create stream\nI0602 23:47:16.495470 128 log.go:172] (0xc00003ac60) (0xc0004085a0) Stream added, broadcasting: 5\nI0602 23:47:16.496467 128 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0602 23:47:16.558280 128 log.go:172] (0xc00003ac60) Data frame received for 5\nI0602 23:47:16.558327 128 log.go:172] (0xc0004085a0) (5) Data frame handling\nI0602 23:47:16.558364 128 log.go:172] (0xc0004085a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0602 23:47:16.581289 128 log.go:172] (0xc00003ac60) Data frame received for 3\nI0602 23:47:16.581406 128 log.go:172] (0xc000540fa0) (3) Data frame handling\nI0602 23:47:16.581434 128 log.go:172] (0xc000540fa0) (3) Data frame sent\nI0602 23:47:16.581449 128 log.go:172] (0xc00003ac60) Data frame received for 3\nI0602 23:47:16.581460 128 log.go:172] (0xc000540fa0) (3) Data frame handling\nI0602 23:47:16.581824 128 log.go:172] (0xc00003ac60) Data frame received for 5\nI0602 23:47:16.581859 128 log.go:172] (0xc0004085a0) (5) Data frame handling\nI0602 23:47:16.583928 128 log.go:172] (0xc00003ac60) Data frame received for 1\nI0602 23:47:16.583953 128 log.go:172] (0xc000276140) (1) Data frame handling\nI0602 23:47:16.584003 128 log.go:172] (0xc000276140) (1) Data frame sent\nI0602 23:47:16.584024 128 log.go:172] (0xc00003ac60) (0xc000276140) Stream removed, 
broadcasting: 1\nI0602 23:47:16.584091 128 log.go:172] (0xc00003ac60) Go away received\nI0602 23:47:16.584391 128 log.go:172] (0xc00003ac60) (0xc000276140) Stream removed, broadcasting: 1\nI0602 23:47:16.584409 128 log.go:172] (0xc00003ac60) (0xc000540fa0) Stream removed, broadcasting: 3\nI0602 23:47:16.584421 128 log.go:172] (0xc00003ac60) (0xc0004085a0) Stream removed, broadcasting: 5\n" Jun 2 23:47:16.590: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 2 23:47:16.590: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 2 23:47:16.590: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 23:47:16.593: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 2 23:47:26.602: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 2 23:47:26.602: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 2 23:47:26.602: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 2 23:47:26.640: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999613s Jun 2 23:47:27.645: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.970208106s Jun 2 23:47:28.651: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.964666051s Jun 2 23:47:29.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.95935892s Jun 2 23:47:30.662: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953974479s Jun 2 23:47:31.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94775645s Jun 2 23:47:32.673: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942532545s Jun 2 23:47:33.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.936852374s Jun 2 23:47:34.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.931895679s Jun 2 23:47:35.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 926.299388ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7483 Jun 2 23:47:36.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 2 23:47:37.169: INFO: stderr: "I0602 23:47:37.080516 148 log.go:172] (0xc00003a580) (0xc000704460) Create stream\nI0602 23:47:37.080594 148 log.go:172] (0xc00003a580) (0xc000704460) Stream added, broadcasting: 1\nI0602 23:47:37.082317 148 log.go:172] (0xc00003a580) Reply frame received for 1\nI0602 23:47:37.082345 148 log.go:172] (0xc00003a580) (0xc000705400) Create stream\nI0602 23:47:37.082351 148 log.go:172] (0xc00003a580) (0xc000705400) Stream added, broadcasting: 3\nI0602 23:47:37.083190 148 log.go:172] (0xc00003a580) Reply frame received for 3\nI0602 23:47:37.083231 148 log.go:172] (0xc00003a580) (0xc0006ee5a0) Create stream\nI0602 23:47:37.083246 148 log.go:172] (0xc00003a580) (0xc0006ee5a0) Stream added, broadcasting: 5\nI0602 23:47:37.084093 148 log.go:172] (0xc00003a580) Reply frame received for 5\nI0602 23:47:37.162698 148 log.go:172] (0xc00003a580) Data frame received for 5\nI0602 23:47:37.162754 148 log.go:172] (0xc0006ee5a0) (5) Data frame handling\nI0602 23:47:37.162772 148 log.go:172] (0xc0006ee5a0) (5) Data frame sent\nI0602 
23:47:37.162785 148 log.go:172] (0xc00003a580) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0602 23:47:37.162799 148 log.go:172] (0xc0006ee5a0) (5) Data frame handling\nI0602 23:47:37.162828 148 log.go:172] (0xc00003a580) Data frame received for 3\nI0602 23:47:37.162843 148 log.go:172] (0xc000705400) (3) Data frame handling\nI0602 23:47:37.162855 148 log.go:172] (0xc000705400) (3) Data frame sent\nI0602 23:47:37.162872 148 log.go:172] (0xc00003a580) Data frame received for 3\nI0602 23:47:37.162884 148 log.go:172] (0xc000705400) (3) Data frame handling\nI0602 23:47:37.164325 148 log.go:172] (0xc00003a580) Data frame received for 1\nI0602 23:47:37.164379 148 log.go:172] (0xc000704460) (1) Data frame handling\nI0602 23:47:37.164393 148 log.go:172] (0xc000704460) (1) Data frame sent\nI0602 23:47:37.164402 148 log.go:172] (0xc00003a580) (0xc000704460) Stream removed, broadcasting: 1\nI0602 23:47:37.164419 148 log.go:172] (0xc00003a580) Go away received\nI0602 23:47:37.164803 148 log.go:172] (0xc00003a580) (0xc000704460) Stream removed, broadcasting: 1\nI0602 23:47:37.164826 148 log.go:172] (0xc00003a580) (0xc000705400) Stream removed, broadcasting: 3\nI0602 23:47:37.164838 148 log.go:172] (0xc00003a580) (0xc0006ee5a0) Stream removed, broadcasting: 5\n" Jun 2 23:47:37.169: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 2 23:47:37.169: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 2 23:47:37.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 2 23:47:37.430: INFO: stderr: "I0602 23:47:37.303918 169 log.go:172] (0xc00003a0b0) (0xc0005ca500) Create stream\nI0602 23:47:37.303980 169 log.go:172] (0xc00003a0b0) (0xc0005ca500) Stream added, broadcasting: 1\nI0602 23:47:37.307113 169 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0602 23:47:37.307154 169 log.go:172] (0xc00003a0b0) (0xc0005301e0) Create stream\nI0602 23:47:37.307167 169 log.go:172] (0xc00003a0b0) (0xc0005301e0) Stream added, broadcasting: 3\nI0602 23:47:37.308204 169 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0602 23:47:37.308233 169 log.go:172] (0xc00003a0b0) (0xc0005caa00) Create stream\nI0602 23:47:37.308243 169 log.go:172] (0xc00003a0b0) (0xc0005caa00) Stream added, broadcasting: 5\nI0602 23:47:37.309576 169 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0602 23:47:37.423262 169 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0602 23:47:37.423312 169 log.go:172] (0xc0005caa00) (5) Data frame handling\nI0602 23:47:37.423332 169 log.go:172] (0xc0005caa00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0602 23:47:37.423356 169 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0602 23:47:37.423371 169 log.go:172] (0xc0005301e0) (3) Data frame handling\nI0602 23:47:37.423386 169 log.go:172] (0xc0005301e0) (3) Data frame sent\nI0602 23:47:37.423417 169 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0602 23:47:37.423432 169 log.go:172] (0xc0005301e0) (3) Data frame handling\nI0602 23:47:37.423760 169 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0602 23:47:37.423780 169 log.go:172] (0xc0005caa00) (5) Data frame handling\nI0602 23:47:37.424748 169 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0602 
23:47:37.424778 169 log.go:172] (0xc0005ca500) (1) Data frame handling\nI0602 23:47:37.424798 169 log.go:172] (0xc0005ca500) (1) Data frame sent\nI0602 23:47:37.425424 169 log.go:172] (0xc00003a0b0) (0xc0005ca500) Stream removed, broadcasting: 1\nI0602 23:47:37.425449 169 log.go:172] (0xc00003a0b0) Go away received\nI0602 23:47:37.425715 169 log.go:172] (0xc00003a0b0) (0xc0005ca500) Stream removed, broadcasting: 1\nI0602 23:47:37.425727 169 log.go:172] (0xc00003a0b0) (0xc0005301e0) Stream removed, broadcasting: 3\nI0602 23:47:37.425732 169 log.go:172] (0xc00003a0b0) (0xc0005caa00) Stream removed, broadcasting: 5\n" Jun 2 23:47:37.430: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 2 23:47:37.430: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 2 23:47:37.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 2 23:47:37.673: INFO: stderr: "I0602 23:47:37.599706 191 log.go:172] (0xc0009a9ad0) (0xc000684fa0) Create stream\nI0602 23:47:37.599746 191 log.go:172] (0xc0009a9ad0) (0xc000684fa0) Stream added, broadcasting: 1\nI0602 23:47:37.604424 191 log.go:172] (0xc0009a9ad0) Reply frame received for 1\nI0602 23:47:37.604465 191 log.go:172] (0xc0009a9ad0) (0xc0006255e0) Create stream\nI0602 23:47:37.604477 191 log.go:172] (0xc0009a9ad0) (0xc0006255e0) Stream added, broadcasting: 3\nI0602 23:47:37.605417 191 log.go:172] (0xc0009a9ad0) Reply frame received for 3\nI0602 23:47:37.605469 191 log.go:172] (0xc0009a9ad0) (0xc000616b40) Create stream\nI0602 23:47:37.605483 191 log.go:172] (0xc0009a9ad0) (0xc000616b40) Stream added, broadcasting: 5\nI0602 23:47:37.606208 191 log.go:172] (0xc0009a9ad0) Reply frame received for 5\nI0602 23:47:37.664569 191 log.go:172] (0xc0009a9ad0) Data frame received for 5\nI0602 23:47:37.664609 191 log.go:172] (0xc0009a9ad0) Data frame received for 3\nI0602 23:47:37.664637 191 log.go:172] (0xc0006255e0) (3) Data frame handling\nI0602 23:47:37.664654 191 log.go:172] (0xc0006255e0) (3) Data frame sent\nI0602 23:47:37.664662 191 log.go:172] (0xc0009a9ad0) Data frame received for 3\nI0602 23:47:37.664669 191 log.go:172] (0xc0006255e0) (3) Data frame handling\nI0602 23:47:37.664700 191 log.go:172] (0xc000616b40) (5) Data frame handling\nI0602 23:47:37.664713 191 log.go:172] (0xc000616b40) (5) Data frame sent\nI0602 23:47:37.664721 191 log.go:172] (0xc0009a9ad0) Data frame received for 5\nI0602 23:47:37.664729 191 log.go:172] (0xc000616b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0602 23:47:37.666118 191 log.go:172] (0xc0009a9ad0) Data frame received for 1\nI0602 23:47:37.666139 191 log.go:172] (0xc000684fa0) (1) Data frame handling\nI0602 23:47:37.666149 191 log.go:172] (0xc000684fa0) (1) Data frame sent\nI0602 23:47:37.666164 191 log.go:172] (0xc0009a9ad0) (0xc000684fa0) Stream removed, broadcasting: 1\nI0602 23:47:37.666179 191 log.go:172] (0xc0009a9ad0) Go away received\nI0602 23:47:37.666559 191 log.go:172] (0xc0009a9ad0) (0xc000684fa0) Stream removed, broadcasting: 1\nI0602 23:47:37.666586 191 log.go:172] (0xc0009a9ad0) (0xc0006255e0) Stream removed, broadcasting: 3\nI0602 23:47:37.666598 191 log.go:172] (0xc0009a9ad0) (0xc000616b40) Stream removed, broadcasting: 5\n" Jun 2 23:47:37.673: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Jun 2 23:47:37.673: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 2 23:47:37.673: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 2 23:47:57.707: INFO: Deleting all statefulset in ns statefulset-7483 Jun 2 23:47:57.709: INFO: Scaling statefulset ss to 0 Jun 2 23:47:57.716: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 23:47:57.718: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:47:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7483" for this suite. • [SLOW TEST:85.950 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":38,"skipped":563,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:47:57.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 2 23:47:57.827: INFO: Waiting up to 5m0s for pod "pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b" in namespace "emptydir-8345" to be "Succeeded or Failed" Jun 2 23:47:57.831: INFO: Pod "pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.842531ms Jun 2 23:47:59.835: INFO: Pod "pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008078645s Jun 2 23:48:01.839: INFO: Pod "pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012364479s STEP: Saw pod success Jun 2 23:48:01.839: INFO: Pod "pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b" satisfied condition "Succeeded or Failed" Jun 2 23:48:01.842: INFO: Trying to get logs from node latest-worker2 pod pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b container test-container: STEP: delete the pod Jun 2 23:48:01.874: INFO: Waiting for pod pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b to disappear Jun 2 23:48:01.892: INFO: Pod pod-c3da5efe-e1ee-449a-92a3-eb227d754e8b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:48:01.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8345" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":563,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:48:01.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:48:01.970: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 2 23:48:03.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8087 create -f -' Jun 2 23:48:07.136: INFO: stderr: "" Jun 2 23:48:07.136: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 2 23:48:07.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8087 delete e2e-test-crd-publish-openapi-5442-crds test-cr' Jun 2 23:48:07.250: INFO: stderr: "" Jun 2 23:48:07.250: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 2 23:48:07.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8087 apply -f -' Jun 2 23:48:08.504: INFO: stderr: "" Jun 2 23:48:08.504: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 2 23:48:08.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8087 delete e2e-test-crd-publish-openapi-5442-crds test-cr' Jun 2 23:48:08.622: INFO: stderr: "" Jun 2 23:48:08.622: INFO: stdout: "e2e-test-crd-publish-openapi-5442-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain 
works to explain CR Jun 2 23:48:08.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5442-crds' Jun 2 23:48:08.867: INFO: stderr: "" Jun 2 23:48:08.867: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5442-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:48:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8087" for this suite. • [SLOW TEST:9.939 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":40,"skipped":569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:48:11.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 2 23:48:11.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9" in namespace "downward-api-1794" to be "Succeeded or Failed" Jun 2 23:48:11.922: INFO: Pod "downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.627378ms Jun 2 23:48:13.964: INFO: Pod "downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049854678s Jun 2 23:48:15.969: INFO: Pod "downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054259427s STEP: Saw pod success Jun 2 23:48:15.969: INFO: Pod "downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9" satisfied condition "Succeeded or Failed" Jun 2 23:48:15.972: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9 container client-container: STEP: delete the pod Jun 2 23:48:16.047: INFO: Waiting for pod downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9 to disappear Jun 2 23:48:16.052: INFO: Pod downwardapi-volume-99d5da7d-65b3-42e8-b039-378d16d9c5a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:48:16.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1794" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":593,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:48:16.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:48:16.348: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:48:17.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3717" for this suite. 
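For reference, the create/delete cycle this spec exercises can be reproduced by hand against an apiextensions.k8s.io/v1 CustomResourceDefinition. A minimal sketch, assuming an illustrative group and kind (the suite itself generates randomized e2e-test-* names):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com        # must be <plural>.<group>; name is illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object             # minimal structural schema required by v1
EOF
kubectl delete crd noxus.example.com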
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":42,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:48:17.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9693 Jun 2 23:48:21.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9693 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 2 23:48:21.818: INFO: stderr: "I0602 23:48:21.665085 322 log.go:172] (0xc000a9f080) (0xc0009d2140) Create stream\nI0602 23:48:21.665334 322 log.go:172] (0xc000a9f080) (0xc0009d2140) Stream added, broadcasting: 1\nI0602 23:48:21.671221 322 log.go:172] (0xc000a9f080) Reply frame received for 1\nI0602 23:48:21.671260 322 log.go:172] (0xc000a9f080) (0xc000837ea0) Create stream\nI0602 23:48:21.671269 322 log.go:172] (0xc000a9f080) (0xc000837ea0) Stream added, broadcasting: 3\nI0602 23:48:21.672354 322 log.go:172] (0xc000a9f080) Reply frame received for 3\nI0602 23:48:21.672396 322 log.go:172] (0xc000a9f080) (0xc0004da140) Create stream\nI0602 23:48:21.672416 322 log.go:172] (0xc000a9f080) (0xc0004da140) Stream added, broadcasting: 5\nI0602 23:48:21.673668 322 log.go:172] (0xc000a9f080) Reply frame received for 5\nI0602 23:48:21.756293 322 log.go:172] (0xc000a9f080) Data frame received for 5\nI0602 23:48:21.756319 322 log.go:172] (0xc0004da140) (5) Data frame handling\nI0602 23:48:21.756336 322 log.go:172] (0xc0004da140) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0602 23:48:21.806911 322 log.go:172] (0xc000a9f080) Data frame received for 3\nI0602 23:48:21.806959 322 log.go:172] (0xc000837ea0) (3) Data frame handling\nI0602 23:48:21.807004 322 log.go:172] (0xc000837ea0) (3) Data frame sent\nI0602 23:48:21.807144 322 log.go:172] (0xc000a9f080) Data frame received for 5\nI0602 23:48:21.807163 322 log.go:172] (0xc0004da140) (5) Data frame handling\nI0602 23:48:21.807398 322 log.go:172] (0xc000a9f080) Data frame received for 3\nI0602 23:48:21.807412 322 log.go:172] (0xc000837ea0) (3) Data frame handling\nI0602 23:48:21.809763 322 log.go:172] (0xc000a9f080) Data frame received for 1\nI0602 23:48:21.809795 322 log.go:172] (0xc0009d2140) (1) Data frame handling\nI0602 23:48:21.809831 322 log.go:172] (0xc0009d2140) (1) Data frame sent\nI0602 23:48:21.809868 322 log.go:172] (0xc000a9f080) (0xc0009d2140) Stream removed, 
broadcasting: 1\nI0602 23:48:21.809886 322 log.go:172] (0xc000a9f080) Go away received\nI0602 23:48:21.810490 322 log.go:172] (0xc000a9f080) (0xc0009d2140) Stream removed, broadcasting: 1\nI0602 23:48:21.810520 322 log.go:172] (0xc000a9f080) (0xc000837ea0) Stream removed, broadcasting: 3\nI0602 23:48:21.810534 322 log.go:172] (0xc000a9f080) (0xc0004da140) Stream removed, broadcasting: 5\n" Jun 2 23:48:21.818: INFO: stdout: "iptables" Jun 2 23:48:21.818: INFO: proxyMode: iptables Jun 2 23:48:21.824: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:21.843: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:23.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:23.847: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:25.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:25.847: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:27.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:27.847: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:29.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:29.848: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:31.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:31.847: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:33.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:33.848: INFO: Pod kube-proxy-mode-detector still exists Jun 2 23:48:35.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 2 23:48:35.847: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9693 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9693 I0602 23:48:35.889481 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9693, replica count: 3 I0602 23:48:38.940032 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 23:48:41.940253 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 2 23:48:41.947: INFO: Creating new exec pod Jun 2 23:48:46.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9693 execpod-affinity7mxjv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jun 2 23:48:47.234: INFO: stderr: "I0602 23:48:47.120481 345 log.go:172] (0xc0009fc000) (0xc00051ef00) Create stream\nI0602 23:48:47.120557 345 log.go:172] (0xc0009fc000) (0xc00051ef00) Stream added, broadcasting: 1\nI0602 23:48:47.122894 345 log.go:172] (0xc0009fc000) Reply frame received for 1\nI0602 23:48:47.122953 345 log.go:172] (0xc0009fc000) (0xc0004a81e0) Create stream\nI0602 23:48:47.122970 345 log.go:172] (0xc0009fc000) (0xc0004a81e0) Stream added, broadcasting: 3\nI0602 23:48:47.123960 345 log.go:172] (0xc0009fc000) Reply frame received for 3\nI0602 23:48:47.123998 345 log.go:172] (0xc0009fc000) (0xc000556000) Create stream\nI0602 23:48:47.124012 345 log.go:172] (0xc0009fc000) (0xc000556000) Stream added, broadcasting: 5\nI0602 23:48:47.124911 345 log.go:172] (0xc0009fc000) Reply frame received for 5\nI0602 23:48:47.210613 345 log.go:172] (0xc0009fc000) Data frame received for 
5\nI0602 23:48:47.210644 345 log.go:172] (0xc000556000) (5) Data frame handling\nI0602 23:48:47.210665 345 log.go:172] (0xc000556000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0602 23:48:47.225752 345 log.go:172] (0xc0009fc000) Data frame received for 5\nI0602 23:48:47.225778 345 log.go:172] (0xc000556000) (5) Data frame handling\nI0602 23:48:47.225800 345 log.go:172] (0xc000556000) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0602 23:48:47.226331 345 log.go:172] (0xc0009fc000) Data frame received for 3\nI0602 23:48:47.226348 345 log.go:172] (0xc0004a81e0) (3) Data frame handling\nI0602 23:48:47.226388 345 log.go:172] (0xc0009fc000) Data frame received for 5\nI0602 23:48:47.226419 345 log.go:172] (0xc000556000) (5) Data frame handling\nI0602 23:48:47.228237 345 log.go:172] (0xc0009fc000) Data frame received for 1\nI0602 23:48:47.228252 345 log.go:172] (0xc00051ef00) (1) Data frame handling\nI0602 23:48:47.228265 345 log.go:172] (0xc00051ef00) (1) Data frame sent\nI0602 23:48:47.228273 345 log.go:172] (0xc0009fc000) (0xc00051ef00) Stream removed, broadcasting: 1\nI0602 23:48:47.228345 345 log.go:172] (0xc0009fc000) Go away received\nI0602 23:48:47.228511 345 log.go:172] (0xc0009fc000) (0xc00051ef00) Stream removed, broadcasting: 1\nI0602 23:48:47.228533 345 log.go:172] (0xc0009fc000) (0xc0004a81e0) Stream removed, broadcasting: 3\nI0602 23:48:47.228547 345 log.go:172] (0xc0009fc000) (0xc000556000) Stream removed, broadcasting: 5\n" Jun 2 23:48:47.235: INFO: stdout: "" Jun 2 23:48:47.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9693 execpod-affinity7mxjv -- /bin/sh -x -c nc -zv -t -w 2 10.105.14.164 80' Jun 2 23:48:47.553: INFO: stderr: "I0602 23:48:47.363060 365 log.go:172] (0xc000b0e580) (0xc00034e460) Create stream\nI0602 23:48:47.363109 365 log.go:172] (0xc000b0e580) (0xc00034e460) Stream added, broadcasting: 1\nI0602 23:48:47.365359 365 log.go:172] (0xc000b0e580) Reply frame received for 1\nI0602 23:48:47.365401 365 log.go:172] (0xc000b0e580) (0xc00034fc20) Create stream\nI0602 23:48:47.365411 365 log.go:172] (0xc000b0e580) (0xc00034fc20) Stream added, broadcasting: 3\nI0602 23:48:47.366351 365 log.go:172] (0xc000b0e580) Reply frame received for 3\nI0602 23:48:47.366384 365 log.go:172] (0xc000b0e580) (0xc0003f4280) Create stream\nI0602 23:48:47.366392 365 log.go:172] (0xc000b0e580) (0xc0003f4280) Stream added, broadcasting: 5\nI0602 23:48:47.367223 365 log.go:172] (0xc000b0e580) Reply frame received for 5\nI0602 23:48:47.545060 365 log.go:172] (0xc000b0e580) Data frame received for 3\nI0602 23:48:47.545085 365 log.go:172] (0xc00034fc20) (3) Data frame handling\nI0602 23:48:47.545099 365 log.go:172] (0xc000b0e580) Data frame received for 5\nI0602 23:48:47.545103 365 log.go:172] (0xc0003f4280) (5) Data frame handling\nI0602 23:48:47.545196 365 log.go:172] (0xc0003f4280) (5) Data frame sent\nI0602 23:48:47.545207 365 log.go:172] (0xc000b0e580) Data frame received for 5\nI0602 23:48:47.545211 365 log.go:172] (0xc0003f4280) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.14.164 80\nConnection to 10.105.14.164 80 port [tcp/http] succeeded!\nI0602 23:48:47.546831 365 log.go:172] (0xc000b0e580) Data frame received for 1\nI0602 23:48:47.546868 365 log.go:172] (0xc00034e460) (1) Data frame handling\nI0602 23:48:47.546904 365 log.go:172] (0xc00034e460) (1) Data frame sent\nI0602 23:48:47.546938 365 log.go:172] (0xc000b0e580) 
(0xc00034e460) Stream removed, broadcasting: 1\nI0602 23:48:47.546966 365 log.go:172] (0xc000b0e580) Go away received\nI0602 23:48:47.547473 365 log.go:172] (0xc000b0e580) (0xc00034e460) Stream removed, broadcasting: 1\nI0602 23:48:47.547498 365 log.go:172] (0xc000b0e580) (0xc00034fc20) Stream removed, broadcasting: 3\nI0602 23:48:47.547511 365 log.go:172] (0xc000b0e580) (0xc0003f4280) Stream removed, broadcasting: 5\n" Jun 2 23:48:47.553: INFO: stdout: "" Jun 2 23:48:47.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9693 execpod-affinity7mxjv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.14.164:80/ ; done' Jun 2 23:48:47.860: INFO: stderr: "I0602 23:48:47.683739 386 log.go:172] (0xc00003b970) (0xc00064f400) Create stream\nI0602 23:48:47.683830 386 log.go:172] (0xc00003b970) (0xc00064f400) Stream added, broadcasting: 1\nI0602 23:48:47.686325 386 log.go:172] (0xc00003b970) Reply frame received for 1\nI0602 23:48:47.686374 386 log.go:172] (0xc00003b970) (0xc0005b6d20) Create stream\nI0602 23:48:47.686389 386 log.go:172] (0xc00003b970) (0xc0005b6d20) Stream added, broadcasting: 3\nI0602 23:48:47.687338 386 log.go:172] (0xc00003b970) Reply frame received for 3\nI0602 23:48:47.687372 386 log.go:172] (0xc00003b970) (0xc000598460) Create stream\nI0602 23:48:47.687386 386 log.go:172] (0xc00003b970) (0xc000598460) Stream added, broadcasting: 5\nI0602 23:48:47.688180 386 log.go:172] (0xc00003b970) Reply frame received for 5\nI0602 23:48:47.766742 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.766781 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.766793 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.766810 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.766817 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.766825 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.772459 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.772484 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.772511 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.773098 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.773316 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.773339 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.773365 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.773380 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.773406 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.779729 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.779752 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.779769 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.780277 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.780310 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.780322 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.780341 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.780349 386 log.go:172] (0xc0005b6d20) (3) Data 
frame handling\nI0602 23:48:47.780358 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.787064 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.787093 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.787107 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.787119 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.787129 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.787154 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.787166 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.787177 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.787195 386 log.go:172] (0xc000598460) (5) Data frame sent\nI0602 23:48:47.787207 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.787218 386 log.go:172] (0xc000598460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.787240 386 log.go:172] (0xc000598460) (5) Data frame sent\nI0602 23:48:47.791426 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.791452 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.791470 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.791855 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.791874 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.791884 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.791896 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.791907 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.791916 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.796340 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.796365 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.796401 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.796636 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.796660 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.796667 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.796694 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.796717 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.796737 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\nI0602 23:48:47.796752 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.796797 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.796820 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.801641 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.801654 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.801661 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.801945 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.801955 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.801961 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.801978 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.802009 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.802032 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.806051 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.806072 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.806086 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.806431 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.806463 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.806525 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.806548 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.806572 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.806585 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.810605 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.810623 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.810640 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.811143 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.811162 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.811177 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\nI0602 23:48:47.811186 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.811214 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.811228 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.811250 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.811266 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.811278 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.818156 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.818177 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.818195 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.818758 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.818791 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.818810 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.818829 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.818841 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.818859 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.823152 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.823165 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.823174 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.823896 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.823915 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.823922 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.823929 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.823933 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.823938 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.828382 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.828466 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.828542 386 log.go:172] (0xc0005b6d20) (3) 
Data frame sent\nI0602 23:48:47.828736 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.828762 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.828794 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.828824 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.828842 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.828867 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.833389 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.833402 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.833412 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.833724 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.833734 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.833741 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.833753 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.833760 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.833765 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.838046 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.838064 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.838079 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.838465 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.838485 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.838503 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.838626 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.838653 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.838680 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.842762 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.842792 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.842812 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.843169 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.843205 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.843238 386 log.go:172] (0xc000598460) (5) Data frame sent\nI0602 23:48:47.843263 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.843273 386 log.go:172] (0xc000598460) (5) Data frame handling\n+ echo\nI0602 23:48:47.843292 386 log.go:172] (0xc00003b970) Data frame received for 3\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.843312 386 log.go:172] (0xc000598460) (5) Data frame sent\nI0602 23:48:47.843336 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.843356 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.848783 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.848798 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.848813 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.849328 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.849347 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.849372 386 log.go:172] (0xc000598460) (5) Data frame sent\n+ echo\n+ curl 
-qI0602 23:48:47.849438 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.849470 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.849485 386 log.go:172] (0xc000598460) (5) Data frame sent\n -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:47.849499 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.849508 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.849517 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.852854 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.852872 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.852894 386 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0602 23:48:47.853801 386 log.go:172] (0xc00003b970) Data frame received for 5\nI0602 23:48:47.853817 386 log.go:172] (0xc000598460) (5) Data frame handling\nI0602 23:48:47.854039 386 log.go:172] (0xc00003b970) Data frame received for 3\nI0602 23:48:47.854053 386 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0602 23:48:47.855611 386 log.go:172] (0xc00003b970) Data frame received for 1\nI0602 23:48:47.855629 386 log.go:172] (0xc00064f400) (1) Data frame handling\nI0602 23:48:47.855647 386 log.go:172] (0xc00064f400) (1) Data frame sent\nI0602 23:48:47.855670 386 log.go:172] (0xc00003b970) (0xc00064f400) Stream removed, broadcasting: 1\nI0602 23:48:47.855899 386 log.go:172] (0xc00003b970) Go away received\nI0602 23:48:47.855927 386 log.go:172] (0xc00003b970) (0xc00064f400) Stream removed, broadcasting: 1\nI0602 23:48:47.855940 386 log.go:172] (0xc00003b970) (0xc0005b6d20) Stream removed, broadcasting: 3\nI0602 23:48:47.855950 386 log.go:172] (0xc00003b970) (0xc000598460) Stream removed, broadcasting: 5\n" Jun 2 23:48:47.860: INFO: stdout: "\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c\naffinity-clusterip-timeout-9hm5c" Jun 2 23:48:47.860: INFO: Received response from host: Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c 
Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Received response from host: affinity-clusterip-timeout-9hm5c Jun 2 23:48:47.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9693 execpod-affinity7mxjv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.105.14.164:80/' Jun 2 23:48:48.071: INFO: stderr: "I0602 23:48:47.982571 406 log.go:172] (0xc00098b550) (0xc000a24780) Create stream\nI0602 23:48:47.982629 406 log.go:172] (0xc00098b550) (0xc000a24780) Stream added, broadcasting: 1\nI0602 23:48:47.987901 406 log.go:172] (0xc00098b550) Reply frame received for 1\nI0602 23:48:47.987971 406 log.go:172] (0xc00098b550) (0xc000516dc0) Create stream\nI0602 23:48:47.987997 406 log.go:172] (0xc00098b550) (0xc000516dc0) Stream added, broadcasting: 3\nI0602 23:48:47.989098 406 log.go:172] (0xc00098b550) Reply frame received for 3\nI0602 23:48:47.989311 406 log.go:172] (0xc00098b550) (0xc0000c8dc0) Create stream\nI0602 23:48:47.989331 406 log.go:172] (0xc00098b550) (0xc0000c8dc0) Stream added, broadcasting: 5\nI0602 23:48:47.990264 406 log.go:172] (0xc00098b550) Reply frame received for 5\nI0602 23:48:48.058005 406 log.go:172] (0xc00098b550) Data frame received for 5\nI0602 23:48:48.058047 406 log.go:172] (0xc0000c8dc0) (5) Data frame handling\nI0602 23:48:48.058066 406 log.go:172] (0xc0000c8dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:48:48.061815 406 log.go:172] (0xc00098b550) Data frame received for 3\nI0602 23:48:48.061836 406 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0602 23:48:48.061856 406 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0602 23:48:48.062528 406 log.go:172] (0xc00098b550) Data frame received for 3\nI0602 23:48:48.062545 406 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0602 23:48:48.062566 406 log.go:172] (0xc00098b550) Data frame received for 5\nI0602 23:48:48.062577 406 log.go:172] (0xc0000c8dc0) (5) Data frame handling\nI0602 23:48:48.064206 406 log.go:172] (0xc00098b550) Data frame received for 1\nI0602 23:48:48.064229 406 log.go:172] (0xc000a24780) (1) Data frame handling\nI0602 23:48:48.064241 406 log.go:172] (0xc000a24780) (1) Data frame sent\nI0602 23:48:48.064265 406 log.go:172] (0xc00098b550) (0xc000a24780) Stream removed, broadcasting: 1\nI0602 23:48:48.064336 406 log.go:172] (0xc00098b550) Go away received\nI0602 23:48:48.064577 406 log.go:172] (0xc00098b550) (0xc000a24780) Stream removed, broadcasting: 1\nI0602 23:48:48.064594 406 log.go:172] (0xc00098b550) (0xc000516dc0) Stream removed, broadcasting: 3\nI0602 23:48:48.064604 406 log.go:172] (0xc00098b550) (0xc0000c8dc0) Stream removed, broadcasting: 5\n" Jun 2 23:48:48.071: INFO: stdout: "affinity-clusterip-timeout-9hm5c" Jun 2 23:49:03.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9693 execpod-affinity7mxjv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.105.14.164:80/' Jun 2 23:49:03.323: INFO: stderr: "I0602 23:49:03.210074 428 log.go:172] (0xc00093cbb0) (0xc0009d6320) Create stream\nI0602 23:49:03.210126 428 log.go:172] (0xc00093cbb0) (0xc0009d6320) Stream added, broadcasting: 1\nI0602 23:49:03.219580 428 log.go:172] (0xc00093cbb0) Reply frame received for 1\nI0602 23:49:03.219706 428 log.go:172] 
(0xc00093cbb0) (0xc0008440a0) Create stream\nI0602 23:49:03.219769 428 log.go:172] (0xc00093cbb0) (0xc0008440a0) Stream added, broadcasting: 3\nI0602 23:49:03.222056 428 log.go:172] (0xc00093cbb0) Reply frame received for 3\nI0602 23:49:03.222180 428 log.go:172] (0xc00093cbb0) (0xc0007226e0) Create stream\nI0602 23:49:03.222237 428 log.go:172] (0xc00093cbb0) (0xc0007226e0) Stream added, broadcasting: 5\nI0602 23:49:03.226718 428 log.go:172] (0xc00093cbb0) Reply frame received for 5\nI0602 23:49:03.314382 428 log.go:172] (0xc00093cbb0) Data frame received for 5\nI0602 23:49:03.314403 428 log.go:172] (0xc0007226e0) (5) Data frame handling\nI0602 23:49:03.314419 428 log.go:172] (0xc0007226e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.14.164:80/\nI0602 23:49:03.316520 428 log.go:172] (0xc00093cbb0) Data frame received for 3\nI0602 23:49:03.316622 428 log.go:172] (0xc0008440a0) (3) Data frame handling\nI0602 23:49:03.316650 428 log.go:172] (0xc0008440a0) (3) Data frame sent\nI0602 23:49:03.316890 428 log.go:172] (0xc00093cbb0) Data frame received for 3\nI0602 23:49:03.316907 428 log.go:172] (0xc0008440a0) (3) Data frame handling\nI0602 23:49:03.316930 428 log.go:172] (0xc00093cbb0) Data frame received for 5\nI0602 23:49:03.316948 428 log.go:172] (0xc0007226e0) (5) Data frame handling\nI0602 23:49:03.319010 428 log.go:172] (0xc00093cbb0) Data frame received for 1\nI0602 23:49:03.319035 428 log.go:172] (0xc0009d6320) (1) Data frame handling\nI0602 23:49:03.319054 428 log.go:172] (0xc0009d6320) (1) Data frame sent\nI0602 23:49:03.319069 428 log.go:172] (0xc00093cbb0) (0xc0009d6320) Stream removed, broadcasting: 1\nI0602 23:49:03.319085 428 log.go:172] (0xc00093cbb0) Go away received\nI0602 23:49:03.319452 428 log.go:172] (0xc00093cbb0) (0xc0009d6320) Stream removed, broadcasting: 1\nI0602 23:49:03.319469 428 log.go:172] (0xc00093cbb0) (0xc0008440a0) Stream removed, broadcasting: 3\nI0602 23:49:03.319478 428 log.go:172] (0xc00093cbb0) (0xc0007226e0) Stream removed, broadcasting: 5\n" Jun 2 23:49:03.323: INFO: stdout: "affinity-clusterip-timeout-ww6mw" Jun 2 23:49:03.323: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9693, will wait for the garbage collector to delete the pods Jun 2 23:49:03.434: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.19258ms Jun 2 23:49:03.935: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.300782ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:49:15.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9693" for this suite. 
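The behavior verified above — a burst of requests all answered by affinity-clusterip-timeout-9hm5c, then affinity-clusterip-timeout-ww6mw once the client sat idle for 15 seconds — is ClientIP session affinity with a timeout. A minimal sketch of such a Service, assuming an illustrative selector and a 10-second timeout (the suite's actual values are not shown in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    name: affinity-clusterip-timeout   # illustrative; must match the RC's pod labels
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP            # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10               # assumed value; idling past it re-balances
EOF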
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:58.052 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":43,"skipped":650,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:49:15.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 2 23:49:16.159: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 2 23:49:18.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738556, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738556, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738556, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738556, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 2 23:49:21.272: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:49:21.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:49:22.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8597" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.231 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":44,"skipped":660,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:49:22.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 2 23:49:22.772: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:49:30.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7167" for this suite. 
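What this spec asserts is ordering: on a restartPolicy: Never pod, each init container must run to completion, in sequence, before the app container starts. A minimal sketch of such a pod, with illustrative image and commands:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:                  # run one at a time, in the order listed
  - name: init1
    image: busybox
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox
    command: ['sh', '-c', 'true']
  containers:                      # started only after every init container exits 0
  - name: run1
    image: busybox
    command: ['sh', '-c', 'echo main container ran']
EOF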
• [SLOW TEST:7.560 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":45,"skipped":676,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:49:30.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0602 23:49:40.421627 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 2 23:49:40.421: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:49:40.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8379" for this suite. 
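"Not orphaning" here means the delete request carries a cascading propagation policy, so the garbage collector removes the pods the ReplicationController owns. A hedged command-line equivalent with an illustrative controller name; the --cascade=background/--cascade=orphan spellings are for kubectl v1.20+, while older clients take --cascade=true/false:

kubectl delete rc my-rc --cascade=background   # GC deletes the RC's pods too
kubectl delete rc my-rc --cascade=orphan       # contrast: pods would be left running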
• [SLOW TEST:10.211 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":46,"skipped":698,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:49:40.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 2 23:49:40.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea" in namespace "projected-5566" to be "Succeeded or Failed" Jun 2 23:49:40.523: INFO: Pod "downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea": Phase="Pending", Reason="", readiness=false. Elapsed: 27.494472ms Jun 2 23:49:42.528: INFO: Pod "downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032667952s Jun 2 23:49:44.532: INFO: Pod "downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037249855s STEP: Saw pod success Jun 2 23:49:44.532: INFO: Pod "downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea" satisfied condition "Succeeded or Failed" Jun 2 23:49:44.536: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea container client-container: STEP: delete the pod Jun 2 23:49:44.592: INFO: Waiting for pod downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea to disappear Jun 2 23:49:44.608: INFO: Pod downwardapi-volume-576f0b46-deb8-4584-bd32-59fd6a6584ea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:49:44.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5566" for this suite. 
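The fallback being tested: when a container sets no memory limit, limits.memory surfaced through the downward API resolves to the node's allocatable memory rather than failing. A minimal sketch of the projected-volume shape involved (pod and path names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'cat /etc/podinfo/memory_limit']
    # no resources.limits.memory here, so the file reports node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF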
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:49:44.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 2 23:49:44.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3881' Jun 2 23:49:44.870: INFO: stderr: "" Jun 2 23:49:44.870: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 2 23:49:49.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3881 -o json' Jun 2 23:49:50.030: INFO: stderr: "" Jun 2 23:49:50.030: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-02T23:49:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-02T23:49:44Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n 
\"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.66\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-02T23:49:48Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3881\",\n \"resourceVersion\": \"9795418\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3881/pods/e2e-test-httpd-pod\",\n \"uid\": \"f13134bc-3c14-4e75-9bfd-87b5ff49bc59\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-szdsj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-szdsj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-szdsj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T23:49:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T23:49:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T23:49:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T23:49:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://9baea8b5d5f5f8e50fdef0f299211ba060303ef7562f553588af475e6286d216\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-02T23:49:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.66\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.66\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-02T23:49:44Z\"\n }\n}\n" STEP: replace the image in the pod Jun 2 23:49:50.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3881' Jun 2 23:49:50.866: INFO: stderr: "" Jun 2 23:49:50.866: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod 
e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 Jun 2 23:49:50.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3881' Jun 2 23:50:04.850: INFO: stderr: "" Jun 2 23:50:04.850: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:50:04.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3881" for this suite. • [SLOW TEST:20.241 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":48,"skipped":753,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:50:04.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Jun 2 23:50:04.983: INFO: Waiting up to 5m0s for pod "client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d" in namespace "containers-3890" to be "Succeeded or Failed" Jun 2 23:50:04.993: INFO: Pod "client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.033358ms Jun 2 23:50:07.038: INFO: Pod "client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054359284s Jun 2 23:50:09.042: INFO: Pod "client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d": Phase="Running", Reason="", readiness=true. Elapsed: 4.058276223s Jun 2 23:50:11.047: INFO: Pod "client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.063264558s STEP: Saw pod success Jun 2 23:50:11.047: INFO: Pod "client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d" satisfied condition "Succeeded or Failed" Jun 2 23:50:11.050: INFO: Trying to get logs from node latest-worker2 pod client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d container test-container: STEP: delete the pod Jun 2 23:50:11.095: INFO: Waiting for pod client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d to disappear Jun 2 23:50:11.100: INFO: Pod client-containers-210f2dda-42a9-4ce6-a45b-6c5a3ce3580d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:50:11.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3890" for this suite. • [SLOW TEST:6.254 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":755,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:50:11.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 2 23:50:18.936: INFO: 8 pods remaining Jun 2 23:50:18.936: INFO: 0 pods have nil DeletionTimestamp Jun 2 23:50:18.936: INFO: Jun 2 23:50:20.704: INFO: 0 pods remaining Jun 2 23:50:20.704: INFO: 0 pods have nil DeletionTimestamp Jun 2 23:50:20.704: INFO: STEP: Gathering metrics W0602 23:50:21.283137 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
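------------------------------
The deleteOptions this test name refers to is foreground cascading deletion, which explains the countdown just logged: the rc is kept, carrying a deletionTimestamp and the foregroundDeletion finalizer, until the garbage collector has removed every pod it owns. A minimal client-go sketch of issuing such a delete, with the namespace "default" and the name "test-rc" as illustrative assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the owner is not removed until the garbage
	// collector has deleted every dependent.
	policy := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "test-rc",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}

	// While pods remain, the rc is still readable: it has a non-nil
	// metadata.deletionTimestamp and the "foregroundDeletion" finalizer.
	if rc, err := cs.CoreV1().ReplicationControllers("default").Get(
		context.TODO(), "test-rc", metav1.GetOptions{}); err == nil {
		fmt.Printf("rc still present: deletionTimestamp=%v finalizers=%v\n",
			rc.DeletionTimestamp, rc.Finalizers)
	}
}
------------------------------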
Jun 2 23:50:21.283: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:50:21.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-963" for this suite. • [SLOW TEST:10.197 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":50,"skipped":756,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:50:21.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the busybox-main-container Jun 2 23:50:28.790: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-210 PodName:pod-sharedvolume-56047632-022c-4115-acd9-8d08d0804d47 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:50:28.790: INFO: >>> kubeConfig: /root/.kube/config I0602 23:50:28.826850 7 log.go:172] (0xc002c658c0) (0xc001d11040) Create stream I0602 23:50:28.826921 7 log.go:172] (0xc002c658c0) (0xc001d11040) Stream added, broadcasting: 1 I0602 23:50:28.828980 7 log.go:172] (0xc002c658c0) Reply frame received for 1 I0602 23:50:28.829031 7 log.go:172] (0xc002c658c0) (0xc00201c000) Create stream I0602 23:50:28.829047 7 log.go:172] (0xc002c658c0) (0xc00201c000) Stream added, broadcasting: 3 I0602 23:50:28.830280 7 log.go:172] (0xc002c658c0) Reply frame
received for 3 I0602 23:50:28.830313 7 log.go:172] (0xc002c658c0) (0xc001d110e0) Create stream I0602 23:50:28.830335 7 log.go:172] (0xc002c658c0) (0xc001d110e0) Stream added, broadcasting: 5 I0602 23:50:28.831538 7 log.go:172] (0xc002c658c0) Reply frame received for 5 I0602 23:50:28.904055 7 log.go:172] (0xc002c658c0) Data frame received for 5 I0602 23:50:28.904129 7 log.go:172] (0xc002c658c0) Data frame received for 1 I0602 23:50:28.904189 7 log.go:172] (0xc001d11040) (1) Data frame handling I0602 23:50:28.904264 7 log.go:172] (0xc001d11040) (1) Data frame sent I0602 23:50:28.904330 7 log.go:172] (0xc002c658c0) (0xc001d11040) Stream removed, broadcasting: 1 I0602 23:50:28.904417 7 log.go:172] (0xc001d110e0) (5) Data frame handling I0602 23:50:28.904459 7 log.go:172] (0xc002c658c0) Data frame received for 3 I0602 23:50:28.904480 7 log.go:172] (0xc00201c000) (3) Data frame handling I0602 23:50:28.904496 7 log.go:172] (0xc00201c000) (3) Data frame sent I0602 23:50:28.904508 7 log.go:172] (0xc002c658c0) Data frame received for 3 I0602 23:50:28.904517 7 log.go:172] (0xc00201c000) (3) Data frame handling I0602 23:50:28.904545 7 log.go:172] (0xc002c658c0) Go away received I0602 23:50:28.904711 7 log.go:172] (0xc002c658c0) (0xc001d11040) Stream removed, broadcasting: 1 I0602 23:50:28.904731 7 log.go:172] (0xc002c658c0) (0xc00201c000) Stream removed, broadcasting: 3 I0602 23:50:28.904748 7 log.go:172] (0xc002c658c0) (0xc001d110e0) Stream removed, broadcasting: 5 Jun 2 23:50:28.904: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:50:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-210" for this suite. 
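------------------------------
The shared-volume pod exercised above pairs two containers on a single emptyDir: one writes a file, and the exec into busybox-main-container reads it back. A minimal sketch of that pod shape with the Go API types follows; the container names, image tag, and sleep-based commands are illustrative stand-ins for the suite's spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mount := []corev1.VolumeMount{{
		Name:      "shared-data",
		MountPath: "/usr/share/volumeshare",
	}}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
		Spec: corev1.PodSpec{
			// An emptyDir lives for the pod's lifetime and is visible to
			// every container that mounts it.
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"/bin/sh", "-c", "echo Hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: mount,
				},
				{
					Name:         "reader",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"/bin/sh", "-c", "sleep 3600"},
					VolumeMounts: mount,
				},
			},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	// Once applied, `kubectl exec pod-sharedvolume-demo -c reader -- cat
	// /usr/share/volumeshare/shareddata.txt` prints what the writer wrote.
	fmt.Println(string(out))
}
------------------------------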
• [SLOW TEST:7.628 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":51,"skipped":757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:50:28.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-x2pv STEP: Creating a pod to test atomic-volume-subpath Jun 2 23:50:29.026: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x2pv" in namespace "subpath-6950" to be "Succeeded or Failed" Jun 2 23:50:29.098: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Pending", Reason="", readiness=false. Elapsed: 72.516711ms Jun 2 23:50:31.103: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076872632s Jun 2 23:50:33.107: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 4.08167564s Jun 2 23:50:35.112: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 6.085979727s Jun 2 23:50:37.116: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 8.090686773s Jun 2 23:50:39.120: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 10.094688828s Jun 2 23:50:41.127: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 12.10157056s Jun 2 23:50:43.132: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 14.106419765s Jun 2 23:50:45.137: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 16.110749873s Jun 2 23:50:47.142: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 18.116147822s Jun 2 23:50:49.146: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 20.120427343s Jun 2 23:50:51.151: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Running", Reason="", readiness=true. Elapsed: 22.124770321s Jun 2 23:50:53.155: INFO: Pod "pod-subpath-test-configmap-x2pv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.129365333s STEP: Saw pod success Jun 2 23:50:53.155: INFO: Pod "pod-subpath-test-configmap-x2pv" satisfied condition "Succeeded or Failed" Jun 2 23:50:53.159: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-x2pv container test-container-subpath-configmap-x2pv: STEP: delete the pod Jun 2 23:50:53.196: INFO: Waiting for pod pod-subpath-test-configmap-x2pv to disappear Jun 2 23:50:53.208: INFO: Pod pod-subpath-test-configmap-x2pv no longer exists STEP: Deleting pod pod-subpath-test-configmap-x2pv Jun 2 23:50:53.208: INFO: Deleting pod "pod-subpath-test-configmap-x2pv" in namespace "subpath-6950" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:50:53.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6950" for this suite. • [SLOW TEST:24.284 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":52,"skipped":794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:50:53.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 2 23:50:53.326: INFO: Waiting up to 5m0s for pod "downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c" in namespace "downward-api-9045" to be "Succeeded or Failed" Jun 2 23:50:53.335: INFO: Pod "downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.27635ms Jun 2 23:50:55.340: INFO: Pod "downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014026971s Jun 2 23:50:57.344: INFO: Pod "downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018377518s STEP: Saw pod success Jun 2 23:50:57.344: INFO: Pod "downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c" satisfied condition "Succeeded or Failed" Jun 2 23:50:57.347: INFO: Trying to get logs from node latest-worker2 pod downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c container dapi-container: STEP: delete the pod Jun 2 23:50:57.367: INFO: Waiting for pod downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c to disappear Jun 2 23:50:57.373: INFO: Pod downward-api-9c84643c-1bb4-4fec-9f16-3ae795e0748c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:50:57.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9045" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:50:57.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:08.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5526" for this suite. • [SLOW TEST:11.276 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":54,"skipped":849,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:08.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 2 23:51:08.965: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 2 23:51:13.969: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:15.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5536" for this suite. • [SLOW TEST:6.514 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":55,"skipped":855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:15.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 2 23:51:15.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b" in namespace "downward-api-9808" to be "Succeeded or Failed" Jun 2 23:51:15.329: INFO: Pod "downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.476102ms Jun 2 23:51:17.470: INFO: Pod "downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.153360178s Jun 2 23:51:19.474: INFO: Pod "downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156988247s STEP: Saw pod success Jun 2 23:51:19.474: INFO: Pod "downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b" satisfied condition "Succeeded or Failed" Jun 2 23:51:19.476: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b container client-container: STEP: delete the pod Jun 2 23:51:19.539: INFO: Waiting for pod downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b to disappear Jun 2 23:51:19.558: INFO: Pod downwardapi-volume-880df55d-a167-4753-89fb-0ec4d536a40b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:19.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9808" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":914,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:19.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:51:19.661: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 2 23:51:24.686: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 2 23:51:24.686: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 2 23:51:26.690: INFO: Creating deployment "test-rollover-deployment" Jun 2 23:51:26.721: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 2 23:51:28.728: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 2 23:51:28.736: INFO: Ensure that both replica sets have 1 created replica Jun 2 23:51:28.740: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 2 23:51:28.747: INFO: Updating deployment test-rollover-deployment Jun 2 23:51:28.747: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 2 23:51:30.772: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 2 23:51:30.778: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 2 23:51:30.783: INFO: all replica sets need to contain the pod-template-hash label Jun 2 23:51:30.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738688, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 23:51:32.791: INFO: all replica sets need to contain the pod-template-hash label Jun 2 23:51:32.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738692, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 23:51:34.790: INFO: all replica sets need to contain the pod-template-hash label Jun 2 23:51:34.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738692, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 23:51:36.791: INFO: all replica sets need to contain the pod-template-hash label Jun 2 23:51:36.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738692, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 23:51:38.793: INFO: all replica sets need to contain the pod-template-hash label Jun 2 23:51:38.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738692, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 23:51:40.792: INFO: all replica sets need to contain the pod-template-hash label Jun 2 23:51:40.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738692, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 23:51:42.845: INFO: Jun 2 23:51:42.845: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 2 23:51:42.852: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6803 /apis/apps/v1/namespaces/deployment-6803/deployments/test-rollover-deployment 18351b47-0627-418a-9b7c-c19a7db7078c 9796191 2 2020-06-02 23:51:26 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-02 23:51:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-02 23:51:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00332b778 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-02 23:51:26 +0000 UTC,LastTransitionTime:2020-06-02 23:51:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-06-02 23:51:42 +0000 UTC,LastTransitionTime:2020-06-02 23:51:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 2 23:51:42.856: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-6803 /apis/apps/v1/namespaces/deployment-6803/replicasets/test-rollover-deployment-7c4fd9c879 6a9ab4ba-011c-4533-9e06-6f900d9564d0 9796180 2 2020-06-02 23:51:28 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 18351b47-0627-418a-9b7c-c19a7db7078c 0xc003313ae7 0xc003313ae8}] [] [{kube-controller-manager Update apps/v1 2020-06-02 23:51:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18351b47-0627-418a-9b7c-c19a7db7078c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003313b78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 2 23:51:42.856: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 2 23:51:42.856: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6803 /apis/apps/v1/namespaces/deployment-6803/replicasets/test-rollover-controller 60ad4aad-9deb-480b-8b85-0d06b682717a 9796190 2 2020-06-02 23:51:19 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 18351b47-0627-418a-9b7c-c19a7db7078c 0xc0033138bf 0xc0033138d0}] [] [{e2e.test Update apps/v1 2020-06-02 23:51:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-02 23:51:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18351b47-0627-418a-9b7c-c19a7db7078c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003313978 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 2 23:51:42.856: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-6803 /apis/apps/v1/namespaces/deployment-6803/replicasets/test-rollover-deployment-5686c4cfd5 c33df5e1-5ff6-49ba-9277-ed47f098fe84 9796128 2 2020-06-02 23:51:26 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 18351b47-0627-418a-9b7c-c19a7db7078c 0xc0033139e7 0xc0033139e8}] [] [{kube-controller-manager Update apps/v1 2020-06-02 23:51:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18351b47-0627-418a-9b7c-c19a7db7078c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003313a78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 2 23:51:42.891: INFO: Pod "test-rollover-deployment-7c4fd9c879-wnhdl" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-wnhdl test-rollover-deployment-7c4fd9c879- deployment-6803 /api/v1/namespaces/deployment-6803/pods/test-rollover-deployment-7c4fd9c879-wnhdl 6776473f-69cf-4252-8891-cc2c1396a265 9796147 0 2020-06-02 23:51:28 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 6a9ab4ba-011c-4533-9e06-6f900d9564d0 0xc003682127 0xc003682128}] [] [{kube-controller-manager Update v1 2020-06-02 23:51:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6a9ab4ba-011c-4533-9e06-6f900d9564d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:51:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmdz9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmdz9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmdz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:51:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-02 23:51:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:51:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:51:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.45,StartTime:2020-06-02 23:51:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:51:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://a5051ec57445a32681aa7417e882171b342362cfc7b942c090ed50c50305e20d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:42.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6803" for this suite. • [SLOW TEST:23.338 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":57,"skipped":916,"failed":0} SSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:42.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:43.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-748" for this suite. 
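The PodTemplates spec above drives a full create/patch/delete cycle against the podtemplates API. A minimal client-go sketch of that cycle (template name, image, namespace, and label are illustrative, not the test's own fixtures):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.TODO(), "default"

	// Create a minimal PodTemplate (name and image are illustrative).
	tpl := &corev1.PodTemplate{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-pod-template"},
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{Containers: []corev1.Container{{
				Name: "nginx", Image: "nginx",
			}}},
		},
	}
	if _, err := cs.CoreV1().PodTemplates(ns).Create(ctx, tpl, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Patch a label onto it, then delete it to complete the lifecycle.
	patch := []byte(`{"metadata":{"labels":{"podtemplate":"patched"}}}`)
	if _, err := cs.CoreV1().PodTemplates(ns).Patch(ctx, "nginx-pod-template", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	if err := cs.CoreV1().PodTemplates(ns).Delete(ctx, "nginx-pod-template", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}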
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":58,"skipped":920,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:43.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 2 23:51:44.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 2 23:51:46.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738704, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738704, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738704, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726738704, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 2 23:51:49.364: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:49.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1034" for this suite. STEP: Destroying namespace "webhook-1034-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.253 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":59,"skipped":923,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:49.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:51:49.742: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:51.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3228" for this suite. 
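Custom resource defaulting is declared in the CRD's structural schema; the apiserver then applies the defaults both to incoming requests and to objects read back from storage, which is what the spec above verifies. A sketch of declaring a default with the apiextensions v1 Go types (the group, field name, and default value are assumptions, not the test's fixture):

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// buildSchema returns a structural schema in which spec.replicas defaults to 1.
// Defaults like this are honored on create/update requests and when objects
// are decoded from etcd.
func buildSchema() *apiextensionsv1.CustomResourceValidation {
	return &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type: "object",
			Properties: map[string]apiextensionsv1.JSONSchemaProps{
				"spec": {
					Type: "object",
					Properties: map[string]apiextensionsv1.JSONSchemaProps{
						"replicas": {
							Type:    "integer",
							Default: &apiextensionsv1.JSON{Raw: []byte(`1`)},
						},
					},
				},
			},
		},
	}
}

func main() { _ = buildSchema() }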
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":60,"skipped":934,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:51.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-605b8dfa-dc24-4e90-a5c1-57b511526cd9 STEP: Creating a pod to test consume secrets Jun 2 23:51:51.261: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8" in namespace "projected-1316" to be "Succeeded or Failed" Jun 2 23:51:51.264: INFO: Pod "pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589619ms Jun 2 23:51:53.267: INFO: Pod "pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006140989s Jun 2 23:51:55.271: INFO: Pod "pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009799091s STEP: Saw pod success Jun 2 23:51:55.271: INFO: Pod "pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8" satisfied condition "Succeeded or Failed" Jun 2 23:51:55.273: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8 container projected-secret-volume-test: STEP: delete the pod Jun 2 23:51:55.301: INFO: Waiting for pod pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8 to disappear Jun 2 23:51:55.303: INFO: Pod pod-projected-secrets-704ce750-12bc-4f6b-9979-40d88938b2d8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:51:55.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1316" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":937,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:51:55.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0602 23:52:07.346349 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 2 23:52:07.346: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:52:07.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3463" for this suite. 
• [SLOW TEST:12.369 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":62,"skipped":944,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:52:07.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 2 23:52:07.774: INFO: Waiting up to 5m0s for pod "pod-9581769a-6108-4365-9ccd-5de64b59d2c1" in namespace "emptydir-4713" to be "Succeeded or Failed" Jun 2 23:52:07.834: INFO: Pod "pod-9581769a-6108-4365-9ccd-5de64b59d2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 59.761699ms Jun 2 23:52:09.839: INFO: Pod "pod-9581769a-6108-4365-9ccd-5de64b59d2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0641526s Jun 2 23:52:11.843: INFO: Pod "pod-9581769a-6108-4365-9ccd-5de64b59d2c1": Phase="Running", Reason="", readiness=true. Elapsed: 4.068421964s Jun 2 23:52:13.922: INFO: Pod "pod-9581769a-6108-4365-9ccd-5de64b59d2c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147720134s STEP: Saw pod success Jun 2 23:52:13.922: INFO: Pod "pod-9581769a-6108-4365-9ccd-5de64b59d2c1" satisfied condition "Succeeded or Failed" Jun 2 23:52:13.926: INFO: Trying to get logs from node latest-worker pod pod-9581769a-6108-4365-9ccd-5de64b59d2c1 container test-container: STEP: delete the pod Jun 2 23:52:14.013: INFO: Waiting for pod pod-9581769a-6108-4365-9ccd-5de64b59d2c1 to disappear Jun 2 23:52:14.035: INFO: Pod pod-9581769a-6108-4365-9ccd-5de64b59d2c1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:52:14.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4713" for this suite. 
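The pod above exercises a memory-backed (tmpfs) emptyDir with a 0777 file created by a non-root user. A rough sketch, substituting busybox and a shell one-liner for the test's actual mounttest image and flags:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	runAsUser := int64(1001) // non-root UID, illustrative
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumMemory backs the volume with tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				SecurityContext: &corev1.SecurityContext{RunAsUser: &runAsUser},
				// Create a 0777 file on the tmpfs mount and print its mode.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}

func main() { _ = emptyDirPod() }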
• [SLOW TEST:6.362 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":952,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:52:14.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:52:46.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7930" for this suite. STEP: Destroying namespace "nsdeletetest-1484" for this suite. Jun 2 23:52:46.141: INFO: Namespace nsdeletetest-1484 was already deleted STEP: Destroying namespace "nsdeletetest-5806" for this suite. 
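Namespace deletion is asynchronous: the apiserver finalizes the namespace, removing every pod in it, before the namespace object itself disappears. A sketch of delete-and-wait with client-go (namespace name and timeouts are illustrative):

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Poll until Get reports NotFound; only then have the namespace and its
	// pods been fully removed.
	err = wait.PollImmediate(2*time.Second, 60*time.Second, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "nsdeletetest", metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
	if err != nil {
		panic(err)
	}
}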
• [SLOW TEST:32.101 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":64,"skipped":956,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:52:46.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 2 23:52:46.281: INFO: Waiting up to 5m0s for pod "downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a" in namespace "downward-api-5174" to be "Succeeded or Failed" Jun 2 23:52:46.287: INFO: Pod "downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.920351ms Jun 2 23:52:48.292: INFO: Pod "downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010152075s Jun 2 23:52:50.296: INFO: Pod "downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014409619s STEP: Saw pod success Jun 2 23:52:50.296: INFO: Pod "downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a" satisfied condition "Succeeded or Failed" Jun 2 23:52:50.299: INFO: Trying to get logs from node latest-worker pod downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a container dapi-container: STEP: delete the pod Jun 2 23:52:50.340: INFO: Waiting for pod downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a to disappear Jun 2 23:52:50.350: INFO: Pod downward-api-3559a10e-a645-49f9-9732-9ec582ae0f6a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:52:50.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5174" for this suite. 
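The downward API exposes pod metadata to containers as environment variables via fieldRef. A sketch of the pod the spec above creates (the env var name is an assumption; the container name matches the log):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// fieldRef resolves pod metadata at admission time;
					// metadata.uid yields the pod's UID as the value.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }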
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1065,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:52:50.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8f29c213-4c0c-4dd4-9b1b-57630b19713b STEP: Creating a pod to test consume secrets Jun 2 23:52:50.511: INFO: Waiting up to 5m0s for pod "pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd" in namespace "secrets-4232" to be "Succeeded or Failed" Jun 2 23:52:50.514: INFO: Pod "pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537542ms Jun 2 23:52:52.518: INFO: Pod "pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006628367s Jun 2 23:52:54.522: INFO: Pod "pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011208358s STEP: Saw pod success Jun 2 23:52:54.523: INFO: Pod "pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd" satisfied condition "Succeeded or Failed" Jun 2 23:52:54.525: INFO: Trying to get logs from node latest-worker pod pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd container secret-volume-test: STEP: delete the pod Jun 2 23:52:54.662: INFO: Waiting for pod pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd to disappear Jun 2 23:52:54.674: INFO: Pod pod-secrets-1eb59513-5e50-4118-83d6-2b2e90234cfd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:52:54.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4232" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":1069,"failed":0} ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:52:54.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 2 23:52:54.785: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4581 /api/v1/namespaces/watch-4581/configmaps/e2e-watch-test-label-changed cf9c6edb-f7cd-43ae-9d79-b964b33b3f65 9796848 0 2020-06-02 23:52:54 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-02 23:52:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 2 23:52:54.785: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4581 /api/v1/namespaces/watch-4581/configmaps/e2e-watch-test-label-changed cf9c6edb-f7cd-43ae-9d79-b964b33b3f65 9796849 0 2020-06-02 23:52:54 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-02 23:52:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 2 23:52:54.785: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4581 /api/v1/namespaces/watch-4581/configmaps/e2e-watch-test-label-changed cf9c6edb-f7cd-43ae-9d79-b964b33b3f65 9796850 0 2020-06-02 23:52:54 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-02 23:52:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 2 23:53:04.832: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4581 /api/v1/namespaces/watch-4581/configmaps/e2e-watch-test-label-changed cf9c6edb-f7cd-43ae-9d79-b964b33b3f65 9796899 0 2020-06-02 23:52:54 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-02 23:53:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 2 23:53:04.832: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4581 /api/v1/namespaces/watch-4581/configmaps/e2e-watch-test-label-changed cf9c6edb-f7cd-43ae-9d79-b964b33b3f65 9796900 0 2020-06-02 23:52:54 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-02 23:53:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 2 23:53:04.833: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4581 /api/v1/namespaces/watch-4581/configmaps/e2e-watch-test-label-changed cf9c6edb-f7cd-43ae-9d79-b964b33b3f65 9796901 0 2020-06-02 23:52:54 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-02 23:53:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:53:04.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4581" for this suite. • [SLOW TEST:10.180 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":67,"skipped":1069,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:53:04.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-e316a3d7-99a7-43e4-9de5-8746b34fb8e3 in namespace container-probe-8975 Jun 2 23:53:08.951: INFO: Started pod busybox-e316a3d7-99a7-43e4-9de5-8746b34fb8e3 in namespace container-probe-8975 STEP: checking the pod's current state and 
verifying that restartCount is present Jun 2 23:53:08.955: INFO: Initial restart count of pod busybox-e316a3d7-99a7-43e4-9de5-8746b34fb8e3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:09.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8975" for this suite. • [SLOW TEST:244.724 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1082,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:09.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 2 23:57:14.349: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:14.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8632" for this suite. 
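With TerminationMessagePolicy set to FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets its termination message filled from the tail of its logs, which is why the spec above expects "DONE". A sketch of such a container spec (image and command are assumptions):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox",
				// The container exits non-zero without touching
				// /dev/termination-log, so the kubelet falls back to the log
				// tail ("DONE") for the termination message.
				Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() { _ = terminationMessagePod() }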
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1100,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:14.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:57:14.538: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:18.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3689" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1104,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:18.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Jun 2 23:57:19.296: INFO: created pod pod-service-account-defaultsa Jun 2 23:57:19.296: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 2 23:57:19.353: INFO: created pod pod-service-account-mountsa Jun 2 23:57:19.353: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 2 23:57:19.383: INFO: created pod pod-service-account-nomountsa Jun 2 23:57:19.383: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 2 23:57:19.399: INFO: created pod pod-service-account-defaultsa-mountspec Jun 2 23:57:19.399: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 2 23:57:19.440: INFO: created pod pod-service-account-mountsa-mountspec Jun 2 23:57:19.440: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 2 23:57:19.485: INFO: created pod pod-service-account-nomountsa-mountspec Jun 2 23:57:19.485: INFO: pod pod-service-account-nomountsa-mountspec 
service account token volume mount: true Jun 2 23:57:19.511: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 2 23:57:19.511: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 2 23:57:19.549: INFO: created pod pod-service-account-mountsa-nomountspec Jun 2 23:57:19.549: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 2 23:57:19.665: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 2 23:57:19.665: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:19.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4500" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":71,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:19.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Jun 2 23:57:19.896: INFO: Waiting up to 5m0s for pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681" in namespace "var-expansion-8990" to be "Succeeded or Failed" Jun 2 23:57:19.946: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681": Phase="Pending", Reason="", readiness=false. Elapsed: 49.728528ms Jun 2 23:57:22.030: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133813514s Jun 2 23:57:24.043: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146497527s Jun 2 23:57:26.468: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681": Phase="Pending", Reason="", readiness=false. Elapsed: 6.571981359s Jun 2 23:57:28.629: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681": Phase="Pending", Reason="", readiness=false. Elapsed: 8.732441397s Jun 2 23:57:30.946: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.050082269s STEP: Saw pod success Jun 2 23:57:30.946: INFO: Pod "var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681" satisfied condition "Succeeded or Failed" Jun 2 23:57:31.162: INFO: Trying to get logs from node latest-worker pod var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681 container dapi-container: STEP: delete the pod Jun 2 23:57:31.534: INFO: Waiting for pod var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681 to disappear Jun 2 23:57:31.541: INFO: Pod var-expansion-a757b9d3-dc5a-4680-9ca7-7ca1e612c681 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:31.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8990" for this suite. • [SLOW TEST:11.788 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1139,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:31.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 2 23:57:38.784: INFO: Successfully updated pod "pod-update-090ad915-442e-4855-9a72-31e868fce853" STEP: verifying the updated pod is in kubernetes Jun 2 23:57:38.790: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:38.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9969" for this suite. • [SLOW TEST:7.251 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":1160,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:38.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:57:50.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3274" for this suite. • [SLOW TEST:11.963 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":74,"skipped":1163,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:57:50.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:07.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7062" for this suite. • [SLOW TEST:16.360 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":75,"skipped":1178,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:07.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-900c984a-64a6-4f67-9ea4-32d97d814cbf STEP: Creating a pod to test consume configMaps Jun 2 23:58:07.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f" in namespace "configmap-3731" to be "Succeeded or Failed" Jun 2 23:58:07.236: INFO: Pod "pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.679519ms Jun 2 23:58:09.326: INFO: Pod "pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105381526s Jun 2 23:58:11.330: INFO: Pod "pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109439319s STEP: Saw pod success Jun 2 23:58:11.330: INFO: Pod "pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f" satisfied condition "Succeeded or Failed" Jun 2 23:58:11.333: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f container configmap-volume-test: STEP: delete the pod Jun 2 23:58:11.392: INFO: Waiting for pod pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f to disappear Jun 2 23:58:11.695: INFO: Pod pod-configmaps-8ee86a2c-03b9-4886-9469-eef35b739d8f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:11.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3731" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1190,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:11.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2b887ff0-0ea1-42c3-9667-0ff974642c13 STEP: Creating a pod to test consume secrets Jun 2 23:58:12.120: INFO: Waiting up to 5m0s for pod "pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297" in namespace "secrets-6213" to be "Succeeded or Failed" Jun 2 23:58:12.123: INFO: Pod "pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297": Phase="Pending", Reason="", readiness=false. Elapsed: 3.132842ms Jun 2 23:58:14.258: INFO: Pod "pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137712648s Jun 2 23:58:16.262: INFO: Pod "pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141965413s STEP: Saw pod success Jun 2 23:58:16.262: INFO: Pod "pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297" satisfied condition "Succeeded or Failed" Jun 2 23:58:16.265: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297 container secret-env-test: STEP: delete the pod Jun 2 23:58:16.372: INFO: Waiting for pod pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297 to disappear Jun 2 23:58:16.393: INFO: Pod pod-secrets-3b2128d4-c5d8-4753-97fd-0ca4b9f79297 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:16.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6213" for this suite. 
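Secrets can also be surfaced as environment variables via secretKeyRef, which is what the spec above consumes. A sketch (the secret and key names are assumptions):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretEnvPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					// secretKeyRef injects a single key of the secret as the
					// variable's value at container start.
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							Key:                  "data-1", // key name assumed
						},
					},
				}},
			}},
		},
	}
}

func main() { _ = secretEnvPod("secret-test") }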
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1207,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:16.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 2 23:58:24.568: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 2 23:58:24.573: INFO: Pod pod-with-prestop-exec-hook still exists Jun 2 23:58:26.573: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 2 23:58:26.578: INFO: Pod pod-with-prestop-exec-hook still exists Jun 2 23:58:28.573: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 2 23:58:28.578: INFO: Pod pod-with-prestop-exec-hook still exists Jun 2 23:58:30.573: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 2 23:58:30.578: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:30.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5513" for this suite. 
• [SLOW TEST:14.119 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":78,"skipped":1207,"failed":0} S ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:30.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:30.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-677" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":79,"skipped":1208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:30.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:58:31.058: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cf3bd78b-4eb1-4d62-9f9a-930ff535f026", Controller:(*bool)(0xc004a37212), BlockOwnerDeletion:(*bool)(0xc004a37213)}} Jun 2 23:58:31.116: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0098df8a-4d4a-4a5d-b7c9-2799dc547e67", Controller:(*bool)(0xc004a373da), BlockOwnerDeletion:(*bool)(0xc004a373db)}} Jun 2 23:58:31.135: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8624b17d-bf68-4d32-8f3b-a86e5f0653ea", Controller:(*bool)(0xc004a375ca), BlockOwnerDeletion:(*bool)(0xc004a375cb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:36.222: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6385" for this suite. • [SLOW TEST:5.488 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":80,"skipped":1235,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:36.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 2 23:58:36.841: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d" in namespace "projected-9312" to be "Succeeded or Failed" Jun 2 23:58:36.972: INFO: Pod "downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d": Phase="Pending", Reason="", readiness=false. Elapsed: 130.968521ms Jun 2 23:58:38.976: INFO: Pod "downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135584381s Jun 2 23:58:40.981: INFO: Pod "downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d": Phase="Running", Reason="", readiness=true. Elapsed: 4.140450802s Jun 2 23:58:42.986: INFO: Pod "downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145032705s STEP: Saw pod success Jun 2 23:58:42.986: INFO: Pod "downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d" satisfied condition "Succeeded or Failed" Jun 2 23:58:42.990: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d container client-container: STEP: delete the pod Jun 2 23:58:43.012: INFO: Waiting for pod downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d to disappear Jun 2 23:58:43.029: INFO: Pod downwardapi-volume-fe58f40a-0cd6-4cb7-af03-ed76b1aad14d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:43.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9312" for this suite. 
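------------------------------
The Lease check earlier in this stretch exercises plain CRUD on coordination.k8s.io/v1 Lease objects, the resource behind node heartbeats and leader election. A minimal sketch of the create-then-renew pattern, assuming an existing clientset; the lease name, holder identity, and duration are illustrative:

package sketches

import (
	"context"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndRenewLease exercises the same API surface as the conformance
// check: create a coordination.k8s.io/v1 Lease, then update its RenewTime
// the way a leader-election or node-heartbeat client would.
func createAndRenewLease(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	holder := "holder-1" // illustrative holder identity
	seconds := int32(30) // how long the lease is considered held
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "example-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
		},
	}
	created, err := cs.CoordinationV1().Leases(ns).Create(ctx, lease, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	created.Spec.RenewTime = &now
	_, err = cs.CoordinationV1().Leases(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}
------------------------------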
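------------------------------
The three pod INFO lines in the garbage-collector case dump ownerReferences that form a cycle: pod1 is owned by pod3, pod2 by pod1, pod3 by pod2. The point of the test is that the collector still deletes all three rather than deadlocking on the circle. A sketch of wiring such a ring with client-go, assuming the pods already exist; the helper name is hypothetical:

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// linkOwnersInRing points each pod's ownerReference at the previous pod in
// the slice, so pods[0] is owned by pods[len-1] and so on, reproducing the
// pod1<-pod2<-pod3<-pod1 circle in the log. The garbage collector tolerates
// the cycle: once any member is deleted, the rest lose their owner and go.
func linkOwnersInRing(ctx context.Context, cs *kubernetes.Clientset, ns string, pods []*corev1.Pod) error {
	yes := true
	for i, p := range pods {
		owner := pods[(i+len(pods)-1)%len(pods)]
		p.OwnerReferences = []metav1.OwnerReference{{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               owner.Name,
			UID:                owner.UID,
			Controller:         &yes,
			BlockOwnerDeletion: &yes,
		}}
		if _, err := cs.CoreV1().Pods(ns).Update(ctx, p, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
------------------------------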
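------------------------------
The downward API case just above mounts a projected volume whose file is backed by a resourceFieldRef on limits.cpu; because the container sets no CPU limit, the kubelet resolves the value to the node's allocatable CPU, which is what the test asserts from the container's log. A sketch of the relevant volume wiring (pod name, image, and mount path are illustrative):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardCPULimitPod builds a pod whose projected downward API volume
// exposes the container's effective CPU limit as a file.
func downwardCPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				// No resources.limits.cpu here, so limits.cpu below falls
				// back to the node's allocatable CPU, as the test verifies.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
------------------------------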
• [SLOW TEST:6.689 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":81,"skipped":1246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:43.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 2 23:58:57.145: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.145: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.186913 7 log.go:172] (0xc005aa6f20) (0xc001d10fa0) Create stream I0602 23:58:57.186953 7 log.go:172] (0xc005aa6f20) (0xc001d10fa0) Stream added, broadcasting: 1 I0602 23:58:57.189058 7 log.go:172] (0xc005aa6f20) Reply frame received for 1 I0602 23:58:57.189094 7 log.go:172] (0xc005aa6f20) (0xc00201c000) Create stream I0602 23:58:57.189106 7 log.go:172] (0xc005aa6f20) (0xc00201c000) Stream added, broadcasting: 3 I0602 23:58:57.190382 7 log.go:172] (0xc005aa6f20) Reply frame received for 3 I0602 23:58:57.190422 7 log.go:172] (0xc005aa6f20) (0xc002095f40) Create stream I0602 23:58:57.190442 7 log.go:172] (0xc005aa6f20) (0xc002095f40) Stream added, broadcasting: 5 I0602 23:58:57.191532 7 log.go:172] (0xc005aa6f20) Reply frame received for 5 I0602 23:58:57.299051 7 log.go:172] (0xc005aa6f20) Data frame received for 5 I0602 23:58:57.299087 7 log.go:172] (0xc002095f40) (5) Data frame handling I0602 23:58:57.299106 7 log.go:172] (0xc005aa6f20) Data frame received for 3 I0602 23:58:57.299115 7 log.go:172] (0xc00201c000) (3) Data frame handling I0602 23:58:57.299126 7 log.go:172] (0xc00201c000) (3) Data frame sent I0602 23:58:57.299134 7 log.go:172] (0xc005aa6f20) Data frame received for 3 I0602 23:58:57.299143 7 log.go:172] (0xc00201c000) (3) Data frame handling I0602 23:58:57.300436 7 log.go:172] (0xc005aa6f20) Data frame received for 1 I0602 23:58:57.300465 7 log.go:172] (0xc001d10fa0) (1) Data frame handling I0602 23:58:57.300479 7 log.go:172] (0xc001d10fa0) (1) Data frame sent I0602 23:58:57.300491 7 log.go:172] (0xc005aa6f20) (0xc001d10fa0) 
Stream removed, broadcasting: 1 I0602 23:58:57.300521 7 log.go:172] (0xc005aa6f20) Go away received I0602 23:58:57.300685 7 log.go:172] (0xc005aa6f20) (0xc001d10fa0) Stream removed, broadcasting: 1 I0602 23:58:57.300705 7 log.go:172] (0xc005aa6f20) (0xc00201c000) Stream removed, broadcasting: 3 I0602 23:58:57.300715 7 log.go:172] (0xc005aa6f20) (0xc002095f40) Stream removed, broadcasting: 5 Jun 2 23:58:57.300: INFO: Exec stderr: "" Jun 2 23:58:57.300: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.300: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.335264 7 log.go:172] (0xc005aa7550) (0xc001d11180) Create stream I0602 23:58:57.335314 7 log.go:172] (0xc005aa7550) (0xc001d11180) Stream added, broadcasting: 1 I0602 23:58:57.337547 7 log.go:172] (0xc005aa7550) Reply frame received for 1 I0602 23:58:57.337575 7 log.go:172] (0xc005aa7550) (0xc0026f6140) Create stream I0602 23:58:57.337586 7 log.go:172] (0xc005aa7550) (0xc0026f6140) Stream added, broadcasting: 3 I0602 23:58:57.338554 7 log.go:172] (0xc005aa7550) Reply frame received for 3 I0602 23:58:57.338593 7 log.go:172] (0xc005aa7550) (0xc0026f61e0) Create stream I0602 23:58:57.338609 7 log.go:172] (0xc005aa7550) (0xc0026f61e0) Stream added, broadcasting: 5 I0602 23:58:57.339506 7 log.go:172] (0xc005aa7550) Reply frame received for 5 I0602 23:58:57.546591 7 log.go:172] (0xc005aa7550) Data frame received for 3 I0602 23:58:57.546621 7 log.go:172] (0xc0026f6140) (3) Data frame handling I0602 23:58:57.546634 7 log.go:172] (0xc0026f6140) (3) Data frame sent I0602 23:58:57.546645 7 log.go:172] (0xc005aa7550) Data frame received for 3 I0602 23:58:57.546654 7 log.go:172] (0xc0026f6140) (3) Data frame handling I0602 23:58:57.546673 7 log.go:172] (0xc005aa7550) Data frame received for 5 I0602 23:58:57.546683 7 log.go:172] (0xc0026f61e0) (5) Data frame handling I0602 23:58:57.548377 7 log.go:172] (0xc005aa7550) Data frame received for 1 I0602 23:58:57.548413 7 log.go:172] (0xc001d11180) (1) Data frame handling I0602 23:58:57.548489 7 log.go:172] (0xc001d11180) (1) Data frame sent I0602 23:58:57.548555 7 log.go:172] (0xc005aa7550) (0xc001d11180) Stream removed, broadcasting: 1 I0602 23:58:57.548639 7 log.go:172] (0xc005aa7550) Go away received I0602 23:58:57.548874 7 log.go:172] (0xc005aa7550) (0xc001d11180) Stream removed, broadcasting: 1 I0602 23:58:57.548887 7 log.go:172] (0xc005aa7550) (0xc0026f6140) Stream removed, broadcasting: 3 I0602 23:58:57.548893 7 log.go:172] (0xc005aa7550) (0xc0026f61e0) Stream removed, broadcasting: 5 Jun 2 23:58:57.548: INFO: Exec stderr: "" Jun 2 23:58:57.548: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.548: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.583963 7 log.go:172] (0xc002c656b0) (0xc0026f6500) Create stream I0602 23:58:57.584007 7 log.go:172] (0xc002c656b0) (0xc0026f6500) Stream added, broadcasting: 1 I0602 23:58:57.586001 7 log.go:172] (0xc002c656b0) Reply frame received for 1 I0602 23:58:57.586029 7 log.go:172] (0xc002c656b0) (0xc0026f65a0) Create stream I0602 23:58:57.586042 7 log.go:172] (0xc002c656b0) (0xc0026f65a0) Stream added, broadcasting: 3 I0602 23:58:57.587041 7 log.go:172] (0xc002c656b0) Reply frame received for 3 I0602 23:58:57.587083 7 
log.go:172] (0xc002c656b0) (0xc001b65860) Create stream I0602 23:58:57.587091 7 log.go:172] (0xc002c656b0) (0xc001b65860) Stream added, broadcasting: 5 I0602 23:58:57.588128 7 log.go:172] (0xc002c656b0) Reply frame received for 5 I0602 23:58:57.632783 7 log.go:172] (0xc002c656b0) Data frame received for 5 I0602 23:58:57.632811 7 log.go:172] (0xc001b65860) (5) Data frame handling I0602 23:58:57.632830 7 log.go:172] (0xc002c656b0) Data frame received for 3 I0602 23:58:57.632837 7 log.go:172] (0xc0026f65a0) (3) Data frame handling I0602 23:58:57.632845 7 log.go:172] (0xc0026f65a0) (3) Data frame sent I0602 23:58:57.632853 7 log.go:172] (0xc002c656b0) Data frame received for 3 I0602 23:58:57.632860 7 log.go:172] (0xc0026f65a0) (3) Data frame handling I0602 23:58:57.634549 7 log.go:172] (0xc002c656b0) Data frame received for 1 I0602 23:58:57.634575 7 log.go:172] (0xc0026f6500) (1) Data frame handling I0602 23:58:57.634595 7 log.go:172] (0xc0026f6500) (1) Data frame sent I0602 23:58:57.634623 7 log.go:172] (0xc002c656b0) (0xc0026f6500) Stream removed, broadcasting: 1 I0602 23:58:57.634647 7 log.go:172] (0xc002c656b0) Go away received I0602 23:58:57.634770 7 log.go:172] (0xc002c656b0) (0xc0026f6500) Stream removed, broadcasting: 1 I0602 23:58:57.634800 7 log.go:172] (0xc002c656b0) (0xc0026f65a0) Stream removed, broadcasting: 3 I0602 23:58:57.634810 7 log.go:172] (0xc002c656b0) (0xc001b65860) Stream removed, broadcasting: 5 Jun 2 23:58:57.634: INFO: Exec stderr: "" Jun 2 23:58:57.634: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.634: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.667230 7 log.go:172] (0xc002c65ce0) (0xc0026f6780) Create stream I0602 23:58:57.667258 7 log.go:172] (0xc002c65ce0) (0xc0026f6780) Stream added, broadcasting: 1 I0602 23:58:57.669351 7 log.go:172] (0xc002c65ce0) Reply frame received for 1 I0602 23:58:57.669389 7 log.go:172] (0xc002c65ce0) (0xc00201c0a0) Create stream I0602 23:58:57.669403 7 log.go:172] (0xc002c65ce0) (0xc00201c0a0) Stream added, broadcasting: 3 I0602 23:58:57.670454 7 log.go:172] (0xc002c65ce0) Reply frame received for 3 I0602 23:58:57.670522 7 log.go:172] (0xc002c65ce0) (0xc00201c140) Create stream I0602 23:58:57.670546 7 log.go:172] (0xc002c65ce0) (0xc00201c140) Stream added, broadcasting: 5 I0602 23:58:57.671627 7 log.go:172] (0xc002c65ce0) Reply frame received for 5 I0602 23:58:57.731922 7 log.go:172] (0xc002c65ce0) Data frame received for 3 I0602 23:58:57.731973 7 log.go:172] (0xc00201c0a0) (3) Data frame handling I0602 23:58:57.731997 7 log.go:172] (0xc00201c0a0) (3) Data frame sent I0602 23:58:57.732013 7 log.go:172] (0xc002c65ce0) Data frame received for 3 I0602 23:58:57.732028 7 log.go:172] (0xc00201c0a0) (3) Data frame handling I0602 23:58:57.732046 7 log.go:172] (0xc002c65ce0) Data frame received for 5 I0602 23:58:57.732062 7 log.go:172] (0xc00201c140) (5) Data frame handling I0602 23:58:57.732953 7 log.go:172] (0xc002c65ce0) Data frame received for 1 I0602 23:58:57.733014 7 log.go:172] (0xc0026f6780) (1) Data frame handling I0602 23:58:57.733036 7 log.go:172] (0xc0026f6780) (1) Data frame sent I0602 23:58:57.733061 7 log.go:172] (0xc002c65ce0) (0xc0026f6780) Stream removed, broadcasting: 1 I0602 23:58:57.733088 7 log.go:172] (0xc002c65ce0) Go away received I0602 23:58:57.733437 7 log.go:172] (0xc002c65ce0) (0xc0026f6780) Stream removed, broadcasting: 1 I0602 
23:58:57.733468 7 log.go:172] (0xc002c65ce0) (0xc00201c0a0) Stream removed, broadcasting: 3 I0602 23:58:57.733488 7 log.go:172] (0xc002c65ce0) (0xc00201c140) Stream removed, broadcasting: 5 Jun 2 23:58:57.733: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 2 23:58:57.733: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.733: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.767596 7 log.go:172] (0xc005aa7b80) (0xc001d114a0) Create stream I0602 23:58:57.767625 7 log.go:172] (0xc005aa7b80) (0xc001d114a0) Stream added, broadcasting: 1 I0602 23:58:57.769772 7 log.go:172] (0xc005aa7b80) Reply frame received for 1 I0602 23:58:57.769821 7 log.go:172] (0xc005aa7b80) (0xc001d115e0) Create stream I0602 23:58:57.769844 7 log.go:172] (0xc005aa7b80) (0xc001d115e0) Stream added, broadcasting: 3 I0602 23:58:57.770877 7 log.go:172] (0xc005aa7b80) Reply frame received for 3 I0602 23:58:57.770919 7 log.go:172] (0xc005aa7b80) (0xc001d117c0) Create stream I0602 23:58:57.770930 7 log.go:172] (0xc005aa7b80) (0xc001d117c0) Stream added, broadcasting: 5 I0602 23:58:57.771767 7 log.go:172] (0xc005aa7b80) Reply frame received for 5 I0602 23:58:57.836787 7 log.go:172] (0xc005aa7b80) Data frame received for 5 I0602 23:58:57.836824 7 log.go:172] (0xc005aa7b80) Data frame received for 3 I0602 23:58:57.836853 7 log.go:172] (0xc001d115e0) (3) Data frame handling I0602 23:58:57.836871 7 log.go:172] (0xc001d115e0) (3) Data frame sent I0602 23:58:57.836914 7 log.go:172] (0xc001d117c0) (5) Data frame handling I0602 23:58:57.836994 7 log.go:172] (0xc005aa7b80) Data frame received for 3 I0602 23:58:57.837029 7 log.go:172] (0xc001d115e0) (3) Data frame handling I0602 23:58:57.839105 7 log.go:172] (0xc005aa7b80) Data frame received for 1 I0602 23:58:57.839142 7 log.go:172] (0xc001d114a0) (1) Data frame handling I0602 23:58:57.839166 7 log.go:172] (0xc001d114a0) (1) Data frame sent I0602 23:58:57.839182 7 log.go:172] (0xc005aa7b80) (0xc001d114a0) Stream removed, broadcasting: 1 I0602 23:58:57.839206 7 log.go:172] (0xc005aa7b80) Go away received I0602 23:58:57.839325 7 log.go:172] (0xc005aa7b80) (0xc001d114a0) Stream removed, broadcasting: 1 I0602 23:58:57.839352 7 log.go:172] (0xc005aa7b80) (0xc001d115e0) Stream removed, broadcasting: 3 I0602 23:58:57.839359 7 log.go:172] (0xc005aa7b80) (0xc001d117c0) Stream removed, broadcasting: 5 Jun 2 23:58:57.839: INFO: Exec stderr: "" Jun 2 23:58:57.839: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.839: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.876970 7 log.go:172] (0xc001b18a50) (0xc002b63d60) Create stream I0602 23:58:57.877011 7 log.go:172] (0xc001b18a50) (0xc002b63d60) Stream added, broadcasting: 1 I0602 23:58:57.879557 7 log.go:172] (0xc001b18a50) Reply frame received for 1 I0602 23:58:57.879596 7 log.go:172] (0xc001b18a50) (0xc001b659a0) Create stream I0602 23:58:57.879615 7 log.go:172] (0xc001b18a50) (0xc001b659a0) Stream added, broadcasting: 3 I0602 23:58:57.880704 7 log.go:172] (0xc001b18a50) Reply frame received for 3 I0602 23:58:57.880760 7 log.go:172] (0xc001b18a50) (0xc002b63e00) Create stream I0602 23:58:57.880784 7 log.go:172] 
(0xc001b18a50) (0xc002b63e00) Stream added, broadcasting: 5 I0602 23:58:57.882236 7 log.go:172] (0xc001b18a50) Reply frame received for 5 I0602 23:58:57.951950 7 log.go:172] (0xc001b18a50) Data frame received for 3 I0602 23:58:57.952006 7 log.go:172] (0xc001b659a0) (3) Data frame handling I0602 23:58:57.952022 7 log.go:172] (0xc001b659a0) (3) Data frame sent I0602 23:58:57.952039 7 log.go:172] (0xc001b18a50) Data frame received for 3 I0602 23:58:57.952051 7 log.go:172] (0xc001b659a0) (3) Data frame handling I0602 23:58:57.952078 7 log.go:172] (0xc001b18a50) Data frame received for 5 I0602 23:58:57.952097 7 log.go:172] (0xc002b63e00) (5) Data frame handling I0602 23:58:57.953271 7 log.go:172] (0xc001b18a50) Data frame received for 1 I0602 23:58:57.953341 7 log.go:172] (0xc002b63d60) (1) Data frame handling I0602 23:58:57.953369 7 log.go:172] (0xc002b63d60) (1) Data frame sent I0602 23:58:57.953387 7 log.go:172] (0xc001b18a50) (0xc002b63d60) Stream removed, broadcasting: 1 I0602 23:58:57.953483 7 log.go:172] (0xc001b18a50) (0xc002b63d60) Stream removed, broadcasting: 1 I0602 23:58:57.953508 7 log.go:172] (0xc001b18a50) (0xc001b659a0) Stream removed, broadcasting: 3 I0602 23:58:57.953612 7 log.go:172] (0xc001b18a50) Go away received I0602 23:58:57.953728 7 log.go:172] (0xc001b18a50) (0xc002b63e00) Stream removed, broadcasting: 5 Jun 2 23:58:57.953: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 2 23:58:57.953: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:57.953: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:57.987295 7 log.go:172] (0xc0014164d0) (0xc001b65d60) Create stream I0602 23:58:57.987329 7 log.go:172] (0xc0014164d0) (0xc001b65d60) Stream added, broadcasting: 1 I0602 23:58:57.989472 7 log.go:172] (0xc0014164d0) Reply frame received for 1 I0602 23:58:57.989509 7 log.go:172] (0xc0014164d0) (0xc002b63ea0) Create stream I0602 23:58:57.989524 7 log.go:172] (0xc0014164d0) (0xc002b63ea0) Stream added, broadcasting: 3 I0602 23:58:57.990588 7 log.go:172] (0xc0014164d0) Reply frame received for 3 I0602 23:58:57.990763 7 log.go:172] (0xc0014164d0) (0xc00201c3c0) Create stream I0602 23:58:57.990781 7 log.go:172] (0xc0014164d0) (0xc00201c3c0) Stream added, broadcasting: 5 I0602 23:58:57.992146 7 log.go:172] (0xc0014164d0) Reply frame received for 5 I0602 23:58:58.044223 7 log.go:172] (0xc0014164d0) Data frame received for 3 I0602 23:58:58.044274 7 log.go:172] (0xc002b63ea0) (3) Data frame handling I0602 23:58:58.044308 7 log.go:172] (0xc002b63ea0) (3) Data frame sent I0602 23:58:58.044573 7 log.go:172] (0xc0014164d0) Data frame received for 3 I0602 23:58:58.044609 7 log.go:172] (0xc002b63ea0) (3) Data frame handling I0602 23:58:58.044825 7 log.go:172] (0xc0014164d0) Data frame received for 5 I0602 23:58:58.044844 7 log.go:172] (0xc00201c3c0) (5) Data frame handling I0602 23:58:58.046055 7 log.go:172] (0xc0014164d0) Data frame received for 1 I0602 23:58:58.046088 7 log.go:172] (0xc001b65d60) (1) Data frame handling I0602 23:58:58.046119 7 log.go:172] (0xc001b65d60) (1) Data frame sent I0602 23:58:58.046135 7 log.go:172] (0xc0014164d0) (0xc001b65d60) Stream removed, broadcasting: 1 I0602 23:58:58.046150 7 log.go:172] (0xc0014164d0) Go away received I0602 23:58:58.046354 7 log.go:172] (0xc0014164d0) (0xc001b65d60) Stream removed, 
broadcasting: 1 I0602 23:58:58.046387 7 log.go:172] (0xc0014164d0) (0xc002b63ea0) Stream removed, broadcasting: 3 I0602 23:58:58.046415 7 log.go:172] (0xc0014164d0) (0xc00201c3c0) Stream removed, broadcasting: 5 Jun 2 23:58:58.046: INFO: Exec stderr: "" Jun 2 23:58:58.046: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:58.046: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:58.074522 7 log.go:172] (0xc001b191e0) (0xc002ce0140) Create stream I0602 23:58:58.074547 7 log.go:172] (0xc001b191e0) (0xc002ce0140) Stream added, broadcasting: 1 I0602 23:58:58.076443 7 log.go:172] (0xc001b191e0) Reply frame received for 1 I0602 23:58:58.076476 7 log.go:172] (0xc001b191e0) (0xc002ce0280) Create stream I0602 23:58:58.076489 7 log.go:172] (0xc001b191e0) (0xc002ce0280) Stream added, broadcasting: 3 I0602 23:58:58.077817 7 log.go:172] (0xc001b191e0) Reply frame received for 3 I0602 23:58:58.077852 7 log.go:172] (0xc001b191e0) (0xc0026f6820) Create stream I0602 23:58:58.077866 7 log.go:172] (0xc001b191e0) (0xc0026f6820) Stream added, broadcasting: 5 I0602 23:58:58.078790 7 log.go:172] (0xc001b191e0) Reply frame received for 5 I0602 23:58:58.170275 7 log.go:172] (0xc001b191e0) Data frame received for 5 I0602 23:58:58.170305 7 log.go:172] (0xc0026f6820) (5) Data frame handling I0602 23:58:58.170582 7 log.go:172] (0xc001b191e0) Data frame received for 3 I0602 23:58:58.170605 7 log.go:172] (0xc002ce0280) (3) Data frame handling I0602 23:58:58.170624 7 log.go:172] (0xc002ce0280) (3) Data frame sent I0602 23:58:58.170638 7 log.go:172] (0xc001b191e0) Data frame received for 3 I0602 23:58:58.170645 7 log.go:172] (0xc002ce0280) (3) Data frame handling I0602 23:58:58.171659 7 log.go:172] (0xc001b191e0) Data frame received for 1 I0602 23:58:58.171673 7 log.go:172] (0xc002ce0140) (1) Data frame handling I0602 23:58:58.171681 7 log.go:172] (0xc002ce0140) (1) Data frame sent I0602 23:58:58.171691 7 log.go:172] (0xc001b191e0) (0xc002ce0140) Stream removed, broadcasting: 1 I0602 23:58:58.171702 7 log.go:172] (0xc001b191e0) Go away received I0602 23:58:58.171832 7 log.go:172] (0xc001b191e0) (0xc002ce0140) Stream removed, broadcasting: 1 I0602 23:58:58.171847 7 log.go:172] (0xc001b191e0) (0xc002ce0280) Stream removed, broadcasting: 3 I0602 23:58:58.171855 7 log.go:172] (0xc001b191e0) (0xc0026f6820) Stream removed, broadcasting: 5 Jun 2 23:58:58.171: INFO: Exec stderr: "" Jun 2 23:58:58.171: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:58.171: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:58.215196 7 log.go:172] (0xc001b198c0) (0xc002ce0500) Create stream I0602 23:58:58.215224 7 log.go:172] (0xc001b198c0) (0xc002ce0500) Stream added, broadcasting: 1 I0602 23:58:58.217420 7 log.go:172] (0xc001b198c0) Reply frame received for 1 I0602 23:58:58.217455 7 log.go:172] (0xc001b198c0) (0xc001b65e00) Create stream I0602 23:58:58.217467 7 log.go:172] (0xc001b198c0) (0xc001b65e00) Stream added, broadcasting: 3 I0602 23:58:58.218437 7 log.go:172] (0xc001b198c0) Reply frame received for 3 I0602 23:58:58.218466 7 log.go:172] (0xc001b198c0) (0xc001b65ea0) Create stream I0602 23:58:58.218478 7 log.go:172] (0xc001b198c0) (0xc001b65ea0) Stream added, broadcasting: 5 I0602 
23:58:58.219462 7 log.go:172] (0xc001b198c0) Reply frame received for 5 I0602 23:58:58.304532 7 log.go:172] (0xc001b198c0) Data frame received for 5 I0602 23:58:58.304588 7 log.go:172] (0xc001b65ea0) (5) Data frame handling I0602 23:58:58.304623 7 log.go:172] (0xc001b198c0) Data frame received for 3 I0602 23:58:58.304637 7 log.go:172] (0xc001b65e00) (3) Data frame handling I0602 23:58:58.304657 7 log.go:172] (0xc001b65e00) (3) Data frame sent I0602 23:58:58.304679 7 log.go:172] (0xc001b198c0) Data frame received for 3 I0602 23:58:58.304705 7 log.go:172] (0xc001b65e00) (3) Data frame handling I0602 23:58:58.306363 7 log.go:172] (0xc001b198c0) Data frame received for 1 I0602 23:58:58.306421 7 log.go:172] (0xc002ce0500) (1) Data frame handling I0602 23:58:58.306455 7 log.go:172] (0xc002ce0500) (1) Data frame sent I0602 23:58:58.306486 7 log.go:172] (0xc001b198c0) (0xc002ce0500) Stream removed, broadcasting: 1 I0602 23:58:58.306535 7 log.go:172] (0xc001b198c0) Go away received I0602 23:58:58.306653 7 log.go:172] (0xc001b198c0) (0xc002ce0500) Stream removed, broadcasting: 1 I0602 23:58:58.306684 7 log.go:172] (0xc001b198c0) (0xc001b65e00) Stream removed, broadcasting: 3 I0602 23:58:58.306707 7 log.go:172] (0xc001b198c0) (0xc001b65ea0) Stream removed, broadcasting: 5 Jun 2 23:58:58.306: INFO: Exec stderr: "" Jun 2 23:58:58.306: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9602 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 23:58:58.306: INFO: >>> kubeConfig: /root/.kube/config I0602 23:58:58.371819 7 log.go:172] (0xc000fa6bb0) (0xc00201c820) Create stream I0602 23:58:58.371859 7 log.go:172] (0xc000fa6bb0) (0xc00201c820) Stream added, broadcasting: 1 I0602 23:58:58.374243 7 log.go:172] (0xc000fa6bb0) Reply frame received for 1 I0602 23:58:58.374283 7 log.go:172] (0xc000fa6bb0) (0xc001d11860) Create stream I0602 23:58:58.374300 7 log.go:172] (0xc000fa6bb0) (0xc001d11860) Stream added, broadcasting: 3 I0602 23:58:58.375151 7 log.go:172] (0xc000fa6bb0) Reply frame received for 3 I0602 23:58:58.375295 7 log.go:172] (0xc000fa6bb0) (0xc001d119a0) Create stream I0602 23:58:58.375349 7 log.go:172] (0xc000fa6bb0) (0xc001d119a0) Stream added, broadcasting: 5 I0602 23:58:58.376077 7 log.go:172] (0xc000fa6bb0) Reply frame received for 5 I0602 23:58:58.434316 7 log.go:172] (0xc000fa6bb0) Data frame received for 5 I0602 23:58:58.434352 7 log.go:172] (0xc001d119a0) (5) Data frame handling I0602 23:58:58.434375 7 log.go:172] (0xc000fa6bb0) Data frame received for 3 I0602 23:58:58.434385 7 log.go:172] (0xc001d11860) (3) Data frame handling I0602 23:58:58.434394 7 log.go:172] (0xc001d11860) (3) Data frame sent I0602 23:58:58.434405 7 log.go:172] (0xc000fa6bb0) Data frame received for 3 I0602 23:58:58.434412 7 log.go:172] (0xc001d11860) (3) Data frame handling I0602 23:58:58.435565 7 log.go:172] (0xc000fa6bb0) Data frame received for 1 I0602 23:58:58.435638 7 log.go:172] (0xc00201c820) (1) Data frame handling I0602 23:58:58.435667 7 log.go:172] (0xc00201c820) (1) Data frame sent I0602 23:58:58.435683 7 log.go:172] (0xc000fa6bb0) (0xc00201c820) Stream removed, broadcasting: 1 I0602 23:58:58.435709 7 log.go:172] (0xc000fa6bb0) Go away received I0602 23:58:58.435835 7 log.go:172] (0xc000fa6bb0) (0xc00201c820) Stream removed, broadcasting: 1 I0602 23:58:58.435857 7 log.go:172] (0xc000fa6bb0) (0xc001d11860) Stream removed, broadcasting: 3 I0602 23:58:58.435865 7 log.go:172] (0xc000fa6bb0) 
(0xc001d119a0) Stream removed, broadcasting: 5 Jun 2 23:58:58.435: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:58:58.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9602" for this suite. • [SLOW TEST:15.409 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":82,"skipped":1272,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:58:58.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
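------------------------------
Returning to the KubeletManagedEtcHosts verification that ends just above: the kubelet rewrites /etc/hosts for ordinary containers, leaves it alone when a container mounts its own file at /etc/hosts, and never touches it for hostNetwork pods, which is exactly the three-way check the exec calls perform. A sketch of the two pods the test builds; images are illustrative and the hostPath volume stands in for whatever the test actually mounts there:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// etcHostsTestPods sketches the test's two pods: a regular pod whose
// containers get a kubelet-managed /etc/hosts (except busybox-3, which
// mounts its own), and a hostNetwork pod whose /etc/hosts is left alone.
func etcHostsTestPods() (*corev1.Pod, *corev1.Pod) {
	mount := corev1.VolumeMount{Name: "host-etc-hosts", MountPath: "/etc/hosts"}
	vol := corev1.Volume{
		Name: "host-etc-hosts",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
		},
	}
	busybox := func(name string, mounts ...corev1.VolumeMount) corev1.Container {
		return corev1.Container{
			Name:         name,
			Image:        "busybox",
			Command:      []string{"sleep", "3600"},
			VolumeMounts: mounts,
		}
	}
	testPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				busybox("busybox-1"),        // /etc/hosts managed by the kubelet
				busybox("busybox-2"),        // /etc/hosts managed by the kubelet
				busybox("busybox-3", mount), // explicit mount: kubelet keeps hands off
			},
			Volumes: []corev1.Volume{vol},
		},
	}
	hostNetPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // shares the node's /etc/hosts; never rewritten
			Containers:  []corev1.Container{busybox("busybox-1"), busybox("busybox-2")},
		},
	}
	return testPod, hostNetPod
}
------------------------------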
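------------------------------
Each ExecWithOptions entry in that block, together with its Create stream / Data frame / Stream removed lines, is a single exec call multiplexed over one SPDY connection, with separate channels carrying stdout, stderr, and the error stream. A sketch of issuing the same cat /etc/hosts through client-go's remotecommand package; the function name is an assumption:

package sketches

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod mirrors the framework's ExecWithOptions: it POSTs to the pod's
// exec subresource and streams the multiplexed stdout/stderr channels back.
func execInPod(config *rest.Config, cs *kubernetes.Clientset, ns, pod, container string, cmd []string) (string, string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   cmd, // e.g. []string{"cat", "/etc/hosts"}
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	// Stream blocks until the remote command exits; the Create stream /
	// Data frame lines in the log are this connection's SPDY frames.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}
------------------------------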
Jun 2 23:58:58.537: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:58:58.569: INFO: Number of nodes with available pods: 0 Jun 2 23:58:58.569: INFO: Node latest-worker is running more than one daemon pod Jun 2 23:58:59.575: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:58:59.578: INFO: Number of nodes with available pods: 0 Jun 2 23:58:59.578: INFO: Node latest-worker is running more than one daemon pod Jun 2 23:59:00.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:00.819: INFO: Number of nodes with available pods: 0 Jun 2 23:59:00.819: INFO: Node latest-worker is running more than one daemon pod Jun 2 23:59:01.575: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:01.579: INFO: Number of nodes with available pods: 0 Jun 2 23:59:01.579: INFO: Node latest-worker is running more than one daemon pod Jun 2 23:59:02.575: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:02.579: INFO: Number of nodes with available pods: 1 Jun 2 23:59:02.579: INFO: Node latest-worker is running more than one daemon pod Jun 2 23:59:03.574: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:03.577: INFO: Number of nodes with available pods: 2 Jun 2 23:59:03.577: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 2 23:59:03.624: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:03.653: INFO: Number of nodes with available pods: 1 Jun 2 23:59:03.653: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:04.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:04.661: INFO: Number of nodes with available pods: 1 Jun 2 23:59:04.661: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:05.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:05.664: INFO: Number of nodes with available pods: 1 Jun 2 23:59:05.664: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:06.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:06.663: INFO: Number of nodes with available pods: 1 Jun 2 23:59:06.663: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:07.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:07.664: INFO: Number of nodes with available pods: 1 Jun 2 23:59:07.664: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:08.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:08.663: INFO: Number of nodes with available pods: 1 Jun 2 23:59:08.663: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:09.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:09.665: INFO: Number of nodes with available pods: 1 Jun 2 23:59:09.666: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:10.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:10.663: INFO: Number of nodes with available pods: 1 Jun 2 23:59:10.663: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:11.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:11.664: INFO: Number of nodes with available pods: 1 Jun 2 23:59:11.664: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:12.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:12.663: INFO: Number of nodes with available pods: 1 Jun 2 23:59:12.663: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:13.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:13.663: INFO: Number of nodes with available pods: 1 Jun 2 23:59:13.663: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:14.659: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:14.663: INFO: Number of nodes with available pods: 1 Jun 2 23:59:14.663: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:15.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:15.664: INFO: Number of nodes with available pods: 1 Jun 2 23:59:15.664: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:16.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:16.662: INFO: Number of nodes with available pods: 1 Jun 2 23:59:16.662: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:17.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:17.791: INFO: Number of nodes with available pods: 1 Jun 2 23:59:17.791: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:18.666: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:18.669: INFO: Number of nodes with available pods: 1 Jun 2 23:59:18.669: INFO: Node latest-worker2 is running more than one daemon pod Jun 2 23:59:19.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 23:59:19.663: INFO: Number of nodes with available pods: 2 Jun 2 23:59:19.663: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6354, will wait for the garbage collector to delete the pods Jun 2 23:59:19.727: INFO: Deleting DaemonSet.extensions daemon-set took: 5.653092ms Jun 2 23:59:20.127: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.301222ms Jun 2 23:59:25.330: INFO: Number of nodes with available pods: 0 Jun 2 23:59:25.330: INFO: Number of running nodes: 0, number of available pods: 0 Jun 2 23:59:25.362: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6354/daemonsets","resourceVersion":"9798585"},"items":null} Jun 2 23:59:25.365: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6354/pods","resourceVersion":"9798585"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:59:25.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6354" for this suite. 
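------------------------------
The DaemonSet run above places one pod on every schedulable node; latest-control-plane stays empty only because the pod template carries no toleration for its node-role.kubernetes.io/master:NoSchedule taint, which the controller notes on every poll. Deleting one daemon pod then simply makes the controller revive it, the second phase of the test. A sketch of an equivalent simple DaemonSet; the labels are illustrative and the image mirrors the suite's httpd image:

package sketches

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet builds a DaemonSet like the test's "daemon-set": one pod
// per node that the pod template can tolerate.
func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// so tainted control-plane nodes are skipped, exactly as the
					// "can't tolerate ... skip checking this node" lines report.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}
------------------------------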
• [SLOW TEST:26.972 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":83,"skipped":1288,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:59:25.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 2 23:59:25.478: INFO: Creating deployment "webserver-deployment" Jun 2 23:59:25.522: INFO: Waiting for observed generation 1 Jun 2 23:59:27.666: INFO: Waiting for all required pods to come up Jun 2 23:59:27.670: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 2 23:59:37.695: INFO: Waiting for deployment "webserver-deployment" to complete Jun 2 23:59:37.701: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 2 23:59:37.714: INFO: Updating deployment webserver-deployment Jun 2 23:59:37.714: INFO: Waiting for observed generation 2 Jun 2 23:59:40.090: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 2 23:59:40.092: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 2 23:59:40.183: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 2 23:59:40.534: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 2 23:59:40.534: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 2 23:59:40.537: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 2 23:59:40.541: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 2 23:59:40.541: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 2 23:59:40.548: INFO: Updating deployment webserver-deployment Jun 2 23:59:40.548: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jun 2 23:59:40.932: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 2 23:59:40.984: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 2 23:59:41.298: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1192 /apis/apps/v1/namespaces/deployment-1192/deployments/webserver-deployment 
da975109-2753-42c0-aed6-cae577fb2514 9798851 3 2020-06-02 23:59:25 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-02 23:59:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00355d5e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-06-02 23:59:38 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-02 23:59:40 +0000 UTC,LastTransitionTime:2020-06-02 23:59:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 2 23:59:41.372: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-1192 /apis/apps/v1/namespaces/deployment-1192/replicasets/webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 9798896 3 
2020-06-02 23:59:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment da975109-2753-42c0-aed6-cae577fb2514 0xc000a13f37 0xc000a13f38}] [] [{kube-controller-manager Update apps/v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da975109-2753-42c0-aed6-cae577fb2514\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a13fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 2 23:59:41.372: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 2 23:59:41.372: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-1192 /apis/apps/v1/namespaces/deployment-1192/replicasets/webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 9798878 3 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment da975109-2753-42c0-aed6-cae577fb2514 0xc003682017 0xc003682018}] [] [{kube-controller-manager Update apps/v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da975109-2753-42c0-aed6-cae577fb2514\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003682088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 2 23:59:41.483: INFO: Pod "webserver-deployment-6676bcd6d4-8qd89" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8qd89 webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-8qd89 912cb518-4f99-450d-a738-02df56e073f1 9798876 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b13e7 0xc0036b13e8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.484: INFO: Pod "webserver-deployment-6676bcd6d4-jt9p5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jt9p5 webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-jt9p5 783df6be-c871-46f4-aa1e-ed50b9a87247 9798816 0 2020-06-02 23:59:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1527 0xc0036b1528}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodS
econds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-02 23:59:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.484: INFO: Pod "webserver-deployment-6676bcd6d4-kdjgx" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kdjgx webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-kdjgx f5a77109-1f40-477d-87e6-a0eb32d87634 9798880 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b16d7 0xc0036b16d8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.484: INFO: Pod "webserver-deployment-6676bcd6d4-kqznt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kqznt webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-kqznt d0d55912-f201-4cb2-aee6-852a66b194d5 9798879 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1817 0xc0036b1818}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers
:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.484: INFO: Pod "webserver-deployment-6676bcd6d4-mdr8m" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mdr8m webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-mdr8m f4aa644e-cc04-491c-91ac-5e02c119d870 9798895 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1957 0xc0036b1958}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.485: INFO: Pod "webserver-deployment-6676bcd6d4-rdsk8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rdsk8 webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-rdsk8 e67cf1b4-bb9b-4310-9fa2-76618065413f 9798855 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1aa7 0xc0036b1aa8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.485: INFO: Pod "webserver-deployment-6676bcd6d4-rjqvl" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rjqvl webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-rjqvl cdbb74a7-6e2f-4bd7-8927-5eb7b032233c 9798888 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1be7 0xc0036b1be8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers
:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.513: INFO: Pod "webserver-deployment-6676bcd6d4-sgwrt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sgwrt webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-sgwrt 8069daf1-43b0-4f7e-b7e3-9330eef54afb 9798805 0 2020-06-02 23:59:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1d27 0xc0036b1d28}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-02 23:59:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.513: INFO: Pod "webserver-deployment-6676bcd6d4-tfdwv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tfdwv webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-tfdwv 1a77d16a-a930-45d2-a5bb-8971af480af4 9798865 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc0036b1ee7 0xc0036b1ee8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.514: INFO: Pod "webserver-deployment-6676bcd6d4-twj4n" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-twj4n webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-twj4n c6f8c275-beaf-4469-9f19-602194c06ecf 9798817 0 2020-06-02 23:59:38 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc00321a027 0xc00321a028}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-02 23:59:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.514: INFO: Pod "webserver-deployment-6676bcd6d4-xkn28" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xkn28 webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-xkn28 ac062106-3781-43c1-b5d1-4356d6181ada 9798849 0 2020-06-02 23:59:40 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc00321a1d7 0xc00321a1d8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.514: INFO: Pod "webserver-deployment-6676bcd6d4-xmtsp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xmtsp webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-xmtsp f029c576-35f3-4ffd-a7b4-378d90d01bbb 9798791 0 2020-06-02 23:59:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc00321a327 0xc00321a328}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-02 23:59:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.514: INFO: Pod "webserver-deployment-6676bcd6d4-z26j7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z26j7 webserver-deployment-6676bcd6d4- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-6676bcd6d4-z26j7 1b967a5e-4d56-40d0-84b3-4866ae78aa71 9798795 0 2020-06-02 23:59:37 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b72fbdc1-7507-426d-89b3-4d7ed5be9de3 0xc00321a4d7 0xc00321a4d8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b72fbdc1-7507-426d-89b3-4d7ed5be9de3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-02 23:59:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.514: INFO: Pod "webserver-deployment-84855cf797-2zdck" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2zdck webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-2zdck 0355af36-251b-4403-aed7-0dca71ac3773 9798908 0 2020-06-02 23:59:40 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321a6a7 0xc00321a6a8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-02 23:59:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.514: INFO: Pod "webserver-deployment-84855cf797-4z97g" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4z97g webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-4z97g 0af87b8f-5db8-4420-b42d-d831f156eebf 9798676 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321a837 0xc00321a838}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 
23:59:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.100,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cbdbeb9eb40eb2e0de851294d17f3e1f9ba900309f1a4bb7fae992cc49e05855,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-58khn" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-58khn webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-58khn 0961d2e4-15b4-4148-89f3-b30deb1d4206 9798718 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321a9e7 0xc00321a9e8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 
23:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.101,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2321dd94910eb35199d177b5d0aa9b81924e89e9c24842e82833fff98beabc91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-7zbzb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7zbzb webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-7zbzb 3e0e96cc-ff06-4a2c-8d6b-360ff3f02135 9798732 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321ab97 0xc00321ab98}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.104\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 
23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.104,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aed46afff9c06535b0843f8edee7298aaacc8c9d272580189babf5381a41d30d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-8pd8r" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8pd8r webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-8pd8r 99cae062-d0fe-480a-b03b-5499c9066735 9798910 0 2020-06-02 23:59:40 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321ad57 0xc00321ad58}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-02 23:59:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-9wtc4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9wtc4 webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-9wtc4 867e8a9c-075d-458d-a6b0-bbe5a6c18296 9798871 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321aef7 0xc00321aef8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-b99sk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-b99sk webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-b99sk 86f4a451-2acc-45a6-be59-77626cac1a7c 9798721 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b027 0xc00321b028}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:35 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 
23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.66,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://736824041948f44f2e2094cd23d3d9bb1fd1d19e28d583a61a642f36e04cdf9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-btk2k" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-btk2k webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-btk2k 0bf85d60-8aa8-44bb-ac3c-6d90c35735e3 9798702 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b1f7 0xc00321b1f8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 
23:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.64,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://48880adc8a8148db6872aad03a7bce5eb826bd662f33d94b0ca56567cba948c0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.515: INFO: Pod "webserver-deployment-84855cf797-d9vkr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-d9vkr webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-d9vkr d5be84bc-c747-402d-9bab-6f2c721538bb 9798889 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b3a7 0xc00321b3a8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-dcqm2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dcqm2 webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-dcqm2 29935360-6bb8-435d-95ac-6525e6f54d4d 9798872 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b4d7 0xc00321b4d8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-dd249" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dd249 webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-dd249 0221769b-f677-4a9b-8bb3-c4bf9d25585f 9798891 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b607 0xc00321b608}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-hggwx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hggwx webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-hggwx e233bddc-9044-465e-ae67-db93eb2cd985 9798892 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b737 0xc00321b738}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil
,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-kljnf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kljnf webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-kljnf 624f62f5-b81a-429e-9476-c513e7330fd5 9798882 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b877 0xc00321b878}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-nklgj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nklgj webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-nklgj 49d126da-1117-4bbd-9dca-ca5b1541b5c1 9798731 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321b9a7 0xc00321b9a8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.65,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f4bb2429049e896b2c4b7b39eec7b447117591a20ed53b67f27ce6832eb4a581,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-q5c7t" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-q5c7t webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-q5c7t a39d12cd-356d-4c57-9573-342a4c5dbee0 9798711 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321bb77 0xc00321bb78}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.102,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8d8bfe0c30c93693a96b8a46cf0cc5d570fdfdeda6b50a6b71ceb190937a293d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-rwnlv" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rwnlv webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-rwnlv 5301477f-9e57-40a7-b725-b81b9c944b45 9798744 0 2020-06-02 23:59:25 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321bd57 0xc00321bd58}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 
23:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.67,StartTime:2020-06-02 23:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-02 23:59:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d0f119d018e5c580a36c3b445750262187fe688b27c8a081dd2ffea119fe222c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.516: INFO: Pod "webserver-deployment-84855cf797-tmwt9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tmwt9 webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-tmwt9 30965c51-86d7-443e-8a53-c66e1ca1d5a2 9798875 0 2020-06-02 23:59:40 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00321bf17 0xc00321bf18}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-02 23:59:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.517: INFO: Pod "webserver-deployment-84855cf797-vdwt6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vdwt6 webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-vdwt6 e91f7655-8f56-4947-a7e1-f1b2241a67af 9798890 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00345e0b7 0xc00345e0b8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFil
esystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.517: INFO: Pod "webserver-deployment-84855cf797-w7tqn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-w7tqn webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-w7tqn d9e03fa5-fd92-4e49-81b2-416570addbbc 9798870 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00345e1e7 0xc00345e1e8}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 2 23:59:41.517: INFO: Pod "webserver-deployment-84855cf797-wxsxx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wxsxx webserver-deployment-84855cf797- deployment-1192 /api/v1/namespaces/deployment-1192/pods/webserver-deployment-84855cf797-wxsxx 833bf90d-22ae-4562-8618-8047e65a4272 9798863 0 2020-06-02 23:59:41 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 02a6f9d1-3a8a-452c-976d-8ab11ddabe3b 0xc00345e327 0xc00345e328}] [] [{kube-controller-manager Update v1 2020-06-02 23:59:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02a6f9d1-3a8a-452c-976d-8ab11ddabe3b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bc45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bc45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bc45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default
-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-02 23:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 2 23:59:41.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1192" for this suite. • [SLOW TEST:16.527 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":84,"skipped":1289,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 2 23:59:41.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 2 23:59:47.905: INFO: Pod name wrapped-volume-race-1988fdd6-60da-4fab-adb6-3e6c577f729f: Found 0 pods out of 5 Jun 2 23:59:53.311: INFO: Pod name wrapped-volume-race-1988fdd6-60da-4fab-adb6-3e6c577f729f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1988fdd6-60da-4fab-adb6-3e6c577f729f in namespace emptydir-wrapper-9978, will wait for the garbage collector to delete the pods Jun 3 00:00:11.069: INFO: Deleting ReplicationController wrapped-volume-race-1988fdd6-60da-4fab-adb6-3e6c577f729f took: 329.417588ms Jun 3 00:00:12.170: INFO: Terminating ReplicationController wrapped-volume-race-1988fdd6-60da-4fab-adb6-3e6c577f729f pods took: 1.100291218s STEP: Creating RC which spawns configmap-volume pods Jun 3 00:00:25.102: 
INFO: Pod name wrapped-volume-race-9a7a66de-ca19-43f5-a6c2-9e61c7c6b99d: Found 0 pods out of 5 Jun 3 00:00:30.112: INFO: Pod name wrapped-volume-race-9a7a66de-ca19-43f5-a6c2-9e61c7c6b99d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9a7a66de-ca19-43f5-a6c2-9e61c7c6b99d in namespace emptydir-wrapper-9978, will wait for the garbage collector to delete the pods Jun 3 00:00:46.360: INFO: Deleting ReplicationController wrapped-volume-race-9a7a66de-ca19-43f5-a6c2-9e61c7c6b99d took: 170.294191ms Jun 3 00:00:46.660: INFO: Terminating ReplicationController wrapped-volume-race-9a7a66de-ca19-43f5-a6c2-9e61c7c6b99d pods took: 300.251352ms STEP: Creating RC which spawns configmap-volume pods Jun 3 00:00:55.121: INFO: Pod name wrapped-volume-race-1cf1fdbb-6112-4378-adb0-03d8345e5326: Found 0 pods out of 5 Jun 3 00:01:00.130: INFO: Pod name wrapped-volume-race-1cf1fdbb-6112-4378-adb0-03d8345e5326: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1cf1fdbb-6112-4378-adb0-03d8345e5326 in namespace emptydir-wrapper-9978, will wait for the garbage collector to delete the pods Jun 3 00:01:14.333: INFO: Deleting ReplicationController wrapped-volume-race-1cf1fdbb-6112-4378-adb0-03d8345e5326 took: 6.167211ms Jun 3 00:01:14.734: INFO: Terminating ReplicationController wrapped-volume-race-1cf1fdbb-6112-4378-adb0-03d8345e5326 pods took: 400.287762ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:01:26.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9978" for this suite. 
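The wrapped-volume-race test that just finished is worth unpacking: it creates 50 ConfigMaps, then three times in a row spins up a ReplicationController whose 5 pods each mount every ConfigMap as a volume, the pattern that historically could race inside the emptyDir "wrapper" used to materialize such volumes (the regression this conformance test guards against, per its name). Below is a minimal client-go sketch of the pod shape being stressed, assuming the kubeconfig path shown in this run; the namespace, ConfigMap names, image, and command are illustrative stand-ins, not the values the e2e framework generates.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// One volume and one mount per ConfigMap; the e2e test mounts 50 of
	// these into each of the 5 pods its ReplicationController spawns.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical name
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: "/etc/cm/" + name,
		})
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29", // illustrative image
				Command:      []string{"sleep", "10000"},
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created pod", created.Name)
}
```

Creating and garbage-collecting several such many-volume pods concurrently, as the test does through a ReplicationController across three rounds, is what makes any per-volume setup race observable.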
• [SLOW TEST:104.875 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":85,"skipped":1296,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:01:26.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-2a0d9bcc-8786-4738-9c18-9ce27690a100 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:01:33.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-289" for this suite. • [SLOW TEST:6.283 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1300,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:01:33.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:01:49.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9551" for this suite. • [SLOW TEST:16.222 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":87,"skipped":1321,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:01:49.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-02c43306-348b-4c57-af09-b9556c5b1ff2 in namespace container-probe-5361 Jun 3 00:01:53.427: INFO: Started pod busybox-02c43306-348b-4c57-af09-b9556c5b1ff2 in namespace container-probe-5361 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 00:01:53.429: INFO: Initial restart count of pod busybox-02c43306-348b-4c57-af09-b9556c5b1ff2 is 0 Jun 3 00:02:45.645: INFO: Restart count of pod container-probe-5361/busybox-02c43306-348b-4c57-af09-b9556c5b1ff2 is now 1 (52.215594547s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:02:45.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5361" for this suite. 
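The container-probe test that just finished is a compact illustration of exec liveness probes: the busybox pod starts with restartCount 0, the probe's `cat /tmp/health` eventually fails, and the kubelet restarts the container (restartCount 1 after ~52s in this run). A sketch of a pod that behaves this way follows, assuming a shell command that removes its own health file so the probe must start failing; the image tag, timings, and names are illustrative, and the Probe fields follow the v1.18-era k8s.io/api of this run (newer releases rename the embedded Handler to ProbeHandler).

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessPod builds a pod whose container deletes /tmp/health after 30s,
// guaranteeing that the exec probe `cat /tmp/health` eventually fails and
// the kubelet restarts the container.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "busybox-liveness-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in k8s.io/api >= v0.23
						Exec: &corev1.ExecAction{
							Command: []string{"cat", "/tmp/health"},
						},
					},
					InitialDelaySeconds: 15, // give the file time to appear
					FailureThreshold:    1,  // restart on the first failed cat
				},
			}},
		},
	}
}

func main() {
	// Print the manifest rather than creating it; the output can be piped
	// to `kubectl apply -f -` against any cluster.
	b, err := json.MarshalIndent(livenessPod(), "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}
```

With the file removed at 30s, a 10s default probe period, and FailureThreshold 1, a restart lands shortly after the first failed probe, roughly in line with the ~52s elapsed restart observed above.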
• [SLOW TEST:56.359 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1329,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:02:45.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:02:49.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5352" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1340,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:02:49.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 3 00:02:49.876: INFO: Waiting up to 5m0s for pod "pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3" in namespace "emptydir-1940" to be "Succeeded or Failed" Jun 3 00:02:49.880: INFO: Pod "pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190025ms Jun 3 00:02:51.884: INFO: Pod "pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007616285s Jun 3 00:02:53.889: INFO: Pod "pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012414527s STEP: Saw pod success Jun 3 00:02:53.889: INFO: Pod "pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3" satisfied condition "Succeeded or Failed" Jun 3 00:02:53.892: INFO: Trying to get logs from node latest-worker pod pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3 container test-container: STEP: delete the pod Jun 3 00:02:53.946: INFO: Waiting for pod pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3 to disappear Jun 3 00:02:53.958: INFO: Pod pod-554d9c08-d7b8-4856-b42b-a8e7270c77e3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:02:53.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1940" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1343,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:02:53.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:02:54.051: INFO: Create a RollingUpdate DaemonSet Jun 3 00:02:54.055: INFO: Check that daemon pods launch on every node of the cluster Jun 3 00:02:54.076: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:02:54.083: INFO: Number of nodes with available pods: 0 Jun 3 00:02:54.083: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:02:55.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:02:55.105: INFO: Number of nodes with available pods: 0 Jun 3 00:02:55.105: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:02:56.471: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:02:56.474: INFO: Number of nodes with available pods: 0 Jun 3 00:02:56.474: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:02:57.172: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:02:57.198: INFO: Number of nodes with available pods: 0 Jun 3 00:02:57.198: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:02:58.089: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node Jun 3 00:02:58.092: INFO: Number of nodes with available pods: 0 Jun 3 00:02:58.092: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:02:59.105: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:02:59.108: INFO: Number of nodes with available pods: 2 Jun 3 00:02:59.108: INFO: Number of running nodes: 2, number of available pods: 2 Jun 3 00:02:59.108: INFO: Update the DaemonSet to trigger a rollout Jun 3 00:02:59.116: INFO: Updating DaemonSet daemon-set Jun 3 00:03:05.157: INFO: Roll back the DaemonSet before rollout is complete Jun 3 00:03:05.165: INFO: Updating DaemonSet daemon-set Jun 3 00:03:05.165: INFO: Make sure DaemonSet rollback is complete Jun 3 00:03:05.219: INFO: Wrong image for pod: daemon-set-sh894. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 3 00:03:05.219: INFO: Pod daemon-set-sh894 is not available Jun 3 00:03:05.223: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:03:06.227: INFO: Wrong image for pod: daemon-set-sh894. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 3 00:03:06.228: INFO: Pod daemon-set-sh894 is not available Jun 3 00:03:06.231: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:03:07.350: INFO: Wrong image for pod: daemon-set-sh894. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 3 00:03:07.350: INFO: Pod daemon-set-sh894 is not available Jun 3 00:03:07.383: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:03:08.228: INFO: Pod daemon-set-6lzkj is not available Jun 3 00:03:08.233: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5377, will wait for the garbage collector to delete the pods Jun 3 00:03:08.299: INFO: Deleting DaemonSet.extensions daemon-set took: 6.452876ms Jun 3 00:03:08.599: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247627ms Jun 3 00:03:12.303: INFO: Number of nodes with available pods: 0 Jun 3 00:03:12.303: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 00:03:12.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5377/daemonsets","resourceVersion":"9800847"},"items":null} Jun 3 00:03:12.308: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5377/pods","resourceVersion":"9800847"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:03:12.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5377" 
for this suite. • [SLOW TEST:18.359 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":91,"skipped":1354,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:03:12.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 3 00:03:12.422: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3458 /api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-watch-closed 41cc0f0b-86db-4631-8dcc-217607510307 9800853 0 2020-06-03 00:03:12 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-03 00:03:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:03:12.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3458 /api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-watch-closed 41cc0f0b-86db-4631-8dcc-217607510307 9800854 0 2020-06-03 00:03:12 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-03 00:03:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 3 00:03:12.444: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3458 /api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-watch-closed 41cc0f0b-86db-4631-8dcc-217607510307 9800855 0 2020-06-03 00:03:12 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-03 00:03:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:03:12.444: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3458 /api/v1/namespaces/watch-3458/configmaps/e2e-watch-test-watch-closed 41cc0f0b-86db-4631-8dcc-217607510307 9800856 0 2020-06-03 00:03:12 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-03 00:03:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:03:12.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3458" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":92,"skipped":1365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:03:12.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:03:12.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-880" for this suite. 
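The discovery walk in the spec above starts at /apis and expects an APIGroupList entry for apiextensions.k8s.io before drilling into the group and version documents. The entry it matches has roughly this shape, abbreviated to the fields the spec checks (standard discovery schema, values as served by a v1.18 apiserver):

kind: APIGroupList
apiVersion: v1
groups:
- name: apiextensions.k8s.io
  versions:
  - groupVersion: apiextensions.k8s.io/v1
    version: v1
  preferredVersion:
    groupVersion: apiextensions.k8s.io/v1
    version: v1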
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":93,"skipped":1390,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:03:12.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:03:12.776: INFO: Waiting up to 5m0s for pod "downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be" in namespace "downward-api-4689" to be "Succeeded or Failed" Jun 3 00:03:12.779: INFO: Pod "downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108033ms Jun 3 00:03:14.784: INFO: Pod "downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008366207s Jun 3 00:03:16.789: INFO: Pod "downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013346024s STEP: Saw pod success Jun 3 00:03:16.789: INFO: Pod "downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be" satisfied condition "Succeeded or Failed" Jun 3 00:03:16.793: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be container client-container: STEP: delete the pod Jun 3 00:03:16.838: INFO: Waiting for pod downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be to disappear Jun 3 00:03:16.864: INFO: Pod downwardapi-volume-396bd3fa-a994-4cba-be52-dd83b59d69be no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:03:16.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4689" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:03:16.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 3 00:03:17.037: INFO: Created pod &Pod{ObjectMeta:{dns-4312 dns-4312 /api/v1/namespaces/dns-4312/pods/dns-4312 e9ac8b9e-2b74-4f0c-b928-7576329a20a5 9800894 0 2020-06-03 00:03:17 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-06-03 00:03:17 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n8rf6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n8rf6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n8rf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defa
ult-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 00:03:17.043: INFO: The status of Pod dns-4312 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:03:19.047: INFO: The status of Pod dns-4312 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:03:21.046: INFO: The status of Pod dns-4312 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jun 3 00:03:21.047: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4312 PodName:dns-4312 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:03:21.047: INFO: >>> kubeConfig: /root/.kube/config I0603 00:03:21.084704 7 log.go:172] (0xc001416630) (0xc0015e6640) Create stream I0603 00:03:21.084754 7 log.go:172] (0xc001416630) (0xc0015e6640) Stream added, broadcasting: 1 I0603 00:03:21.087247 7 log.go:172] (0xc001416630) Reply frame received for 1 I0603 00:03:21.087289 7 log.go:172] (0xc001416630) (0xc002549040) Create stream I0603 00:03:21.087302 7 log.go:172] (0xc001416630) (0xc002549040) Stream added, broadcasting: 3 I0603 00:03:21.088463 7 log.go:172] (0xc001416630) Reply frame received for 3 I0603 00:03:21.088503 7 log.go:172] (0xc001416630) (0xc002549180) Create stream I0603 00:03:21.088518 7 log.go:172] (0xc001416630) (0xc002549180) Stream added, broadcasting: 5 I0603 00:03:21.089668 7 log.go:172] (0xc001416630) Reply frame received for 5 I0603 00:03:21.192008 7 log.go:172] (0xc001416630) Data frame received for 3 I0603 00:03:21.192052 7 log.go:172] (0xc002549040) (3) Data frame handling I0603 00:03:21.192083 7 log.go:172] (0xc002549040) (3) Data frame sent I0603 00:03:21.193622 7 log.go:172] (0xc001416630) Data frame received for 3 I0603 00:03:21.193653 7 log.go:172] (0xc002549040) (3) Data frame handling I0603 00:03:21.193975 7 log.go:172] (0xc001416630) Data frame received for 5 I0603 00:03:21.193995 7 log.go:172] (0xc002549180) (5) Data frame handling I0603 00:03:21.195708 7 log.go:172] (0xc001416630) Data frame received for 1 I0603 00:03:21.195732 7 log.go:172] (0xc0015e6640) (1) Data frame handling I0603 00:03:21.195749 7 log.go:172] (0xc0015e6640) (1) Data frame sent I0603 00:03:21.195770 7 log.go:172] (0xc001416630) (0xc0015e6640) Stream removed, broadcasting: 1 I0603 00:03:21.195796 7 log.go:172] (0xc001416630) Go away received I0603 00:03:21.196102 7 log.go:172] (0xc001416630) (0xc0015e6640) Stream removed, broadcasting: 1 I0603 00:03:21.196126 7 log.go:172] (0xc001416630) (0xc002549040) Stream removed, broadcasting: 3 I0603 
00:03:21.196139 7 log.go:172] (0xc001416630) (0xc002549180) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Jun 3 00:03:21.196: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4312 PodName:dns-4312 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:03:21.196: INFO: >>> kubeConfig: /root/.kube/config I0603 00:03:21.225813 7 log.go:172] (0xc001416c60) (0xc0015e6a00) Create stream I0603 00:03:21.225843 7 log.go:172] (0xc001416c60) (0xc0015e6a00) Stream added, broadcasting: 1 I0603 00:03:21.227360 7 log.go:172] (0xc001416c60) Reply frame received for 1 I0603 00:03:21.227388 7 log.go:172] (0xc001416c60) (0xc0026a03c0) Create stream I0603 00:03:21.227398 7 log.go:172] (0xc001416c60) (0xc0026a03c0) Stream added, broadcasting: 3 I0603 00:03:21.228078 7 log.go:172] (0xc001416c60) Reply frame received for 3 I0603 00:03:21.228107 7 log.go:172] (0xc001416c60) (0xc0015e6aa0) Create stream I0603 00:03:21.228117 7 log.go:172] (0xc001416c60) (0xc0015e6aa0) Stream added, broadcasting: 5 I0603 00:03:21.228870 7 log.go:172] (0xc001416c60) Reply frame received for 5 I0603 00:03:21.319904 7 log.go:172] (0xc001416c60) Data frame received for 3 I0603 00:03:21.319938 7 log.go:172] (0xc0026a03c0) (3) Data frame handling I0603 00:03:21.319965 7 log.go:172] (0xc0026a03c0) (3) Data frame sent I0603 00:03:21.322058 7 log.go:172] (0xc001416c60) Data frame received for 5 I0603 00:03:21.322090 7 log.go:172] (0xc0015e6aa0) (5) Data frame handling I0603 00:03:21.322298 7 log.go:172] (0xc001416c60) Data frame received for 3 I0603 00:03:21.322338 7 log.go:172] (0xc0026a03c0) (3) Data frame handling I0603 00:03:21.323931 7 log.go:172] (0xc001416c60) Data frame received for 1 I0603 00:03:21.323979 7 log.go:172] (0xc0015e6a00) (1) Data frame handling I0603 00:03:21.324004 7 log.go:172] (0xc0015e6a00) (1) Data frame sent I0603 00:03:21.324033 7 log.go:172] (0xc001416c60) (0xc0015e6a00) Stream removed, broadcasting: 1 I0603 00:03:21.324157 7 log.go:172] (0xc001416c60) Go away received I0603 00:03:21.324206 7 log.go:172] (0xc001416c60) (0xc0015e6a00) Stream removed, broadcasting: 1 I0603 00:03:21.324240 7 log.go:172] (0xc001416c60) (0xc0026a03c0) Stream removed, broadcasting: 3 I0603 00:03:21.324260 7 log.go:172] (0xc001416c60) (0xc0015e6aa0) Stream removed, broadcasting: 5 Jun 3 00:03:21.324: INFO: Deleting pod dns-4312... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:03:21.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4312" for this suite. 
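Stripped of generated metadata and defaulted fields, the pod dumped above reduces to the manifest below: dnsPolicy None plus an explicit dnsConfig, which is exactly what the /agnhost dns-server-list and dns-suffix exec calls then verify from inside the container.

apiVersion: v1
kind: Pod
metadata:
  name: dns-4312
  namespace: dns-4312
spec:
  dnsPolicy: "None"          # ignore the cluster and node resolv.conf entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1                # must show up as the pod's only nameserver
    searches:
    - resolv.conf.local      # must show up in the pod's search list
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]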
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":95,"skipped":1434,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:03:21.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-ad16c4d0-0484-4f59-b329-67cc2d2dd377 in namespace container-probe-7673 Jun 3 00:03:25.877: INFO: Started pod test-webserver-ad16c4d0-0484-4f59-b329-67cc2d2dd377 in namespace container-probe-7673 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 00:03:25.881: INFO: Initial restart count of pod test-webserver-ad16c4d0-0484-4f59-b329-67cc2d2dd377 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:07:26.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7673" for this suite. 
• [SLOW TEST:245.589 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:07:27.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:07:27.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1618" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":97,"skipped":1470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:07:27.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:07:32.293: INFO: Waiting up to 5m0s for pod "client-envvars-a750a618-6909-4a69-a38f-b5291141917d" in namespace "pods-174" to be "Succeeded or Failed" Jun 3 00:07:32.430: INFO: Pod "client-envvars-a750a618-6909-4a69-a38f-b5291141917d": Phase="Pending", Reason="", readiness=false. Elapsed: 137.144487ms Jun 3 00:07:34.544: INFO: Pod "client-envvars-a750a618-6909-4a69-a38f-b5291141917d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.251247605s
Jun 3 00:07:36.548: INFO: Pod "client-envvars-a750a618-6909-4a69-a38f-b5291141917d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2552587s
Jun 3 00:07:38.553: INFO: Pod "client-envvars-a750a618-6909-4a69-a38f-b5291141917d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260162004s
STEP: Saw pod success
Jun 3 00:07:38.553: INFO: Pod "client-envvars-a750a618-6909-4a69-a38f-b5291141917d" satisfied condition "Succeeded or Failed"
Jun 3 00:07:38.557: INFO: Trying to get logs from node latest-worker2 pod client-envvars-a750a618-6909-4a69-a38f-b5291141917d container env3cont:
STEP: delete the pod
Jun 3 00:07:38.611: INFO: Waiting for pod client-envvars-a750a618-6909-4a69-a38f-b5291141917d to disappear
Jun 3 00:07:38.615: INFO: Pod client-envvars-a750a618-6909-4a69-a38f-b5291141917d no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 3 00:07:38.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-174" for this suite.
• [SLOW TEST:10.973 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":98,"skipped":1498,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 3 00:07:38.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Jun 3 00:07:39.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 3 00:07:54.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5037" for this suite.
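The served flag toggled in the spec above is a per-version CRD field; flipping it to false is what makes that version's definitions drop out of the published OpenAPI document while the other version is left untouched. A minimal two-version sketch, with group, kind, and schema chosen for illustration only:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com      # illustrative group and plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false             # flipping this removes v2's definitions from the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object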
• [SLOW TEST:16.215 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":99,"skipped":1499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:07:54.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:07:54.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912" in namespace "projected-3106" to be "Succeeded or Failed" Jun 3 00:07:54.956: INFO: Pod "downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04202ms Jun 3 00:07:56.960: INFO: Pod "downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008300978s Jun 3 00:07:58.965: INFO: Pod "downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013292444s STEP: Saw pod success Jun 3 00:07:58.965: INFO: Pod "downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912" satisfied condition "Succeeded or Failed" Jun 3 00:07:58.968: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912 container client-container: STEP: delete the pod Jun 3 00:07:59.007: INFO: Waiting for pod downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912 to disappear Jun 3 00:07:59.020: INFO: Pod downwardapi-volume-22330154-77a2-44fe-8056-f2fe2a874912 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:07:59.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3106" for this suite. 
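Same downward API data as the memory-limit spec earlier, but delivered through a projected volume source, which is the variant the spec above covers. A sketch; the name, image, and the 500m limit are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cpulimit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29            # illustrative image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:                     # downwardAPI as one source of a projected volume
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m          # surface the limit in millicores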
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1558,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:07:59.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4754 STEP: creating replication controller nodeport-test in namespace services-4754 I0603 00:07:59.167297 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4754, replica count: 2 I0603 00:08:02.217675 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:08:05.217864 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:08:05.217: INFO: Creating new exec pod Jun 3 00:08:10.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4754 execpod52tmn -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jun 3 00:08:13.158: INFO: stderr: "I0603 00:08:13.007162 532 log.go:172] (0xc000448000) (0xc00058ec80) Create stream\nI0603 00:08:13.007213 532 log.go:172] (0xc000448000) (0xc00058ec80) Stream added, broadcasting: 1\nI0603 00:08:13.009376 532 log.go:172] (0xc000448000) Reply frame received for 1\nI0603 00:08:13.009404 532 log.go:172] (0xc000448000) (0xc00058fc20) Create stream\nI0603 00:08:13.009412 532 log.go:172] (0xc000448000) (0xc00058fc20) Stream added, broadcasting: 3\nI0603 00:08:13.010476 532 log.go:172] (0xc000448000) Reply frame received for 3\nI0603 00:08:13.010529 532 log.go:172] (0xc000448000) (0xc000552500) Create stream\nI0603 00:08:13.010546 532 log.go:172] (0xc000448000) (0xc000552500) Stream added, broadcasting: 5\nI0603 00:08:13.011708 532 log.go:172] (0xc000448000) Reply frame received for 5\nI0603 00:08:13.134489 532 log.go:172] (0xc000448000) Data frame received for 5\nI0603 00:08:13.134520 532 log.go:172] (0xc000552500) (5) Data frame handling\nI0603 00:08:13.134541 532 log.go:172] (0xc000552500) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0603 00:08:13.144710 532 log.go:172] (0xc000448000) Data frame received for 5\nI0603 00:08:13.144761 532 log.go:172] (0xc000552500) (5) Data frame handling\nI0603 00:08:13.144785 532 log.go:172] (0xc000552500) (5) Data frame sent\nI0603 00:08:13.144822 532 log.go:172] (0xc000448000) Data frame received for 5\nI0603 00:08:13.144863 532 log.go:172] (0xc000552500) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0603 00:08:13.145427 532 
log.go:172] (0xc000448000) Data frame received for 3\nI0603 00:08:13.145463 532 log.go:172] (0xc00058fc20) (3) Data frame handling\nI0603 00:08:13.147478 532 log.go:172] (0xc000448000) Data frame received for 1\nI0603 00:08:13.147514 532 log.go:172] (0xc00058ec80) (1) Data frame handling\nI0603 00:08:13.147530 532 log.go:172] (0xc00058ec80) (1) Data frame sent\nI0603 00:08:13.147551 532 log.go:172] (0xc000448000) (0xc00058ec80) Stream removed, broadcasting: 1\nI0603 00:08:13.147581 532 log.go:172] (0xc000448000) Go away received\nI0603 00:08:13.148137 532 log.go:172] (0xc000448000) (0xc00058ec80) Stream removed, broadcasting: 1\nI0603 00:08:13.148164 532 log.go:172] (0xc000448000) (0xc00058fc20) Stream removed, broadcasting: 3\nI0603 00:08:13.148180 532 log.go:172] (0xc000448000) (0xc000552500) Stream removed, broadcasting: 5\n" Jun 3 00:08:13.158: INFO: stdout: "" Jun 3 00:08:13.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4754 execpod52tmn -- /bin/sh -x -c nc -zv -t -w 2 10.106.162.199 80' Jun 3 00:08:13.387: INFO: stderr: "I0603 00:08:13.302900 567 log.go:172] (0xc000bcb1e0) (0xc000848f00) Create stream\nI0603 00:08:13.302955 567 log.go:172] (0xc000bcb1e0) (0xc000848f00) Stream added, broadcasting: 1\nI0603 00:08:13.305720 567 log.go:172] (0xc000bcb1e0) Reply frame received for 1\nI0603 00:08:13.305767 567 log.go:172] (0xc000bcb1e0) (0xc000b240a0) Create stream\nI0603 00:08:13.305786 567 log.go:172] (0xc000bcb1e0) (0xc000b240a0) Stream added, broadcasting: 3\nI0603 00:08:13.306995 567 log.go:172] (0xc000bcb1e0) Reply frame received for 3\nI0603 00:08:13.307024 567 log.go:172] (0xc000bcb1e0) (0xc000b24140) Create stream\nI0603 00:08:13.307035 567 log.go:172] (0xc000bcb1e0) (0xc000b24140) Stream added, broadcasting: 5\nI0603 00:08:13.307949 567 log.go:172] (0xc000bcb1e0) Reply frame received for 5\nI0603 00:08:13.378897 567 log.go:172] (0xc000bcb1e0) Data frame received for 3\nI0603 00:08:13.378947 567 log.go:172] (0xc000b240a0) (3) Data frame handling\nI0603 00:08:13.379140 567 log.go:172] (0xc000bcb1e0) Data frame received for 5\nI0603 00:08:13.379167 567 log.go:172] (0xc000b24140) (5) Data frame handling\nI0603 00:08:13.379187 567 log.go:172] (0xc000b24140) (5) Data frame sent\nI0603 00:08:13.379206 567 log.go:172] (0xc000bcb1e0) Data frame received for 5\nI0603 00:08:13.379223 567 log.go:172] (0xc000b24140) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.162.199 80\nConnection to 10.106.162.199 80 port [tcp/http] succeeded!\nI0603 00:08:13.380832 567 log.go:172] (0xc000bcb1e0) Data frame received for 1\nI0603 00:08:13.380849 567 log.go:172] (0xc000848f00) (1) Data frame handling\nI0603 00:08:13.380857 567 log.go:172] (0xc000848f00) (1) Data frame sent\nI0603 00:08:13.380871 567 log.go:172] (0xc000bcb1e0) (0xc000848f00) Stream removed, broadcasting: 1\nI0603 00:08:13.380957 567 log.go:172] (0xc000bcb1e0) Go away received\nI0603 00:08:13.381306 567 log.go:172] (0xc000bcb1e0) (0xc000848f00) Stream removed, broadcasting: 1\nI0603 00:08:13.381325 567 log.go:172] (0xc000bcb1e0) (0xc000b240a0) Stream removed, broadcasting: 3\nI0603 00:08:13.381333 567 log.go:172] (0xc000bcb1e0) (0xc000b24140) Stream removed, broadcasting: 5\n" Jun 3 00:08:13.387: INFO: stdout: "" Jun 3 00:08:13.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4754 execpod52tmn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30846' Jun 3 00:08:13.637: 
INFO: stderr: "I0603 00:08:13.565603 586 log.go:172] (0xc000b91ad0) (0xc00061d4a0) Create stream\nI0603 00:08:13.565663 586 log.go:172] (0xc000b91ad0) (0xc00061d4a0) Stream added, broadcasting: 1\nI0603 00:08:13.570108 586 log.go:172] (0xc000b91ad0) Reply frame received for 1\nI0603 00:08:13.570154 586 log.go:172] (0xc000b91ad0) (0xc000688e60) Create stream\nI0603 00:08:13.570169 586 log.go:172] (0xc000b91ad0) (0xc000688e60) Stream added, broadcasting: 3\nI0603 00:08:13.571239 586 log.go:172] (0xc000b91ad0) Reply frame received for 3\nI0603 00:08:13.571293 586 log.go:172] (0xc000b91ad0) (0xc0005321e0) Create stream\nI0603 00:08:13.571317 586 log.go:172] (0xc000b91ad0) (0xc0005321e0) Stream added, broadcasting: 5\nI0603 00:08:13.572303 586 log.go:172] (0xc000b91ad0) Reply frame received for 5\nI0603 00:08:13.629693 586 log.go:172] (0xc000b91ad0) Data frame received for 3\nI0603 00:08:13.629730 586 log.go:172] (0xc000688e60) (3) Data frame handling\nI0603 00:08:13.629756 586 log.go:172] (0xc000b91ad0) Data frame received for 5\nI0603 00:08:13.629772 586 log.go:172] (0xc0005321e0) (5) Data frame handling\nI0603 00:08:13.629786 586 log.go:172] (0xc0005321e0) (5) Data frame sent\nI0603 00:08:13.629796 586 log.go:172] (0xc000b91ad0) Data frame received for 5\nI0603 00:08:13.629802 586 log.go:172] (0xc0005321e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30846\nConnection to 172.17.0.13 30846 port [tcp/30846] succeeded!\nI0603 00:08:13.630916 586 log.go:172] (0xc000b91ad0) Data frame received for 1\nI0603 00:08:13.630947 586 log.go:172] (0xc00061d4a0) (1) Data frame handling\nI0603 00:08:13.630963 586 log.go:172] (0xc00061d4a0) (1) Data frame sent\nI0603 00:08:13.630975 586 log.go:172] (0xc000b91ad0) (0xc00061d4a0) Stream removed, broadcasting: 1\nI0603 00:08:13.630998 586 log.go:172] (0xc000b91ad0) Go away received\nI0603 00:08:13.631340 586 log.go:172] (0xc000b91ad0) (0xc00061d4a0) Stream removed, broadcasting: 1\nI0603 00:08:13.631356 586 log.go:172] (0xc000b91ad0) (0xc000688e60) Stream removed, broadcasting: 3\nI0603 00:08:13.631364 586 log.go:172] (0xc000b91ad0) (0xc0005321e0) Stream removed, broadcasting: 5\n" Jun 3 00:08:13.637: INFO: stdout: "" Jun 3 00:08:13.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4754 execpod52tmn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30846' Jun 3 00:08:13.869: INFO: stderr: "I0603 00:08:13.803352 609 log.go:172] (0xc00003ad10) (0xc00051adc0) Create stream\nI0603 00:08:13.803411 609 log.go:172] (0xc00003ad10) (0xc00051adc0) Stream added, broadcasting: 1\nI0603 00:08:13.806251 609 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0603 00:08:13.806305 609 log.go:172] (0xc00003ad10) (0xc0006b2aa0) Create stream\nI0603 00:08:13.806322 609 log.go:172] (0xc00003ad10) (0xc0006b2aa0) Stream added, broadcasting: 3\nI0603 00:08:13.807369 609 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0603 00:08:13.807417 609 log.go:172] (0xc00003ad10) (0xc00049a500) Create stream\nI0603 00:08:13.807433 609 log.go:172] (0xc00003ad10) (0xc00049a500) Stream added, broadcasting: 5\nI0603 00:08:13.808591 609 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0603 00:08:13.860939 609 log.go:172] (0xc00003ad10) Data frame received for 5\nI0603 00:08:13.860981 609 log.go:172] (0xc00049a500) (5) Data frame handling\nI0603 00:08:13.861018 609 log.go:172] (0xc00049a500) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30846\nI0603 00:08:13.861889 609 log.go:172] 
(0xc00003ad10) Data frame received for 5\nI0603 00:08:13.861913 609 log.go:172] (0xc00049a500) (5) Data frame handling\nI0603 00:08:13.861947 609 log.go:172] (0xc00049a500) (5) Data frame sent\nConnection to 172.17.0.12 30846 port [tcp/30846] succeeded!\nI0603 00:08:13.862373 609 log.go:172] (0xc00003ad10) Data frame received for 5\nI0603 00:08:13.862417 609 log.go:172] (0xc00049a500) (5) Data frame handling\nI0603 00:08:13.862555 609 log.go:172] (0xc00003ad10) Data frame received for 3\nI0603 00:08:13.862588 609 log.go:172] (0xc0006b2aa0) (3) Data frame handling\nI0603 00:08:13.864401 609 log.go:172] (0xc00003ad10) Data frame received for 1\nI0603 00:08:13.864419 609 log.go:172] (0xc00051adc0) (1) Data frame handling\nI0603 00:08:13.864436 609 log.go:172] (0xc00051adc0) (1) Data frame sent\nI0603 00:08:13.864449 609 log.go:172] (0xc00003ad10) (0xc00051adc0) Stream removed, broadcasting: 1\nI0603 00:08:13.864678 609 log.go:172] (0xc00003ad10) Go away received\nI0603 00:08:13.864793 609 log.go:172] (0xc00003ad10) (0xc00051adc0) Stream removed, broadcasting: 1\nI0603 00:08:13.864810 609 log.go:172] (0xc00003ad10) (0xc0006b2aa0) Stream removed, broadcasting: 3\nI0603 00:08:13.864819 609 log.go:172] (0xc00003ad10) (0xc00049a500) Stream removed, broadcasting: 5\n" Jun 3 00:08:13.869: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:13.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4754" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.842 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":101,"skipped":1564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:13.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 00:08:13.946: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 00:08:13.971: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 00:08:13.999: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 3 00:08:14.006: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.006: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 3 00:08:14.006: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.006: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 3 00:08:14.006: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.006: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 00:08:14.006: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.006: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 00:08:14.006: INFO: execpod52tmn from services-4754 started at 2020-06-03 00:08:05 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.006: INFO: Container agnhost-pause ready: true, restart count 0 Jun 3 00:08:14.006: INFO: nodeport-test-4wrql from services-4754 started at 2020-06-03 00:07:59 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.006: INFO: Container nodeport-test ready: true, restart count 0 Jun 3 00:08:14.006: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 3 00:08:14.017: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.017: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 3 00:08:14.017: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.017: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 3 00:08:14.017: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.017: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 00:08:14.017: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.017: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 00:08:14.017: INFO: nodeport-test-2g7wm from services-4754 started at 2020-06-03 00:07:59 +0000 UTC (1 container statuses recorded) Jun 3 00:08:14.017: INFO: Container nodeport-test ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jun 3 00:08:14.080: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker Jun 3 00:08:14.080: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 Jun 3 00:08:14.080: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker Jun 3 00:08:14.080: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 Jun 3 00:08:14.080: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker Jun 3 00:08:14.080: INFO: Pod kube-proxy-pcmmp requesting 
resource cpu=0m on Node latest-worker2 Jun 3 00:08:14.080: INFO: Pod execpod52tmn requesting resource cpu=0m on Node latest-worker Jun 3 00:08:14.080: INFO: Pod nodeport-test-2g7wm requesting resource cpu=0m on Node latest-worker2 Jun 3 00:08:14.080: INFO: Pod nodeport-test-4wrql requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Jun 3 00:08:14.080: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Jun 3 00:08:14.087: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9.1614e008310e7f7c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-303/filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9.1614e008b78cf7eb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9.1614e00906ba9a0e], Reason = [Created], Message = [Created container filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9] STEP: Considering event: Type = [Normal], Name = [filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9.1614e00916d9a311], Reason = [Started], Message = [Started container filler-pod-4438a993-822e-423c-b1a1-6a9805e041a9] STEP: Considering event: Type = [Normal], Name = [filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3.1614e0082d8aed82], Reason = [Scheduled], Message = [Successfully assigned sched-pred-303/filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3.1614e0087c04409b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3.1614e008f4209632], Reason = [Created], Message = [Created container filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3] STEP: Considering event: Type = [Normal], Name = [filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3.1614e0090ca6f16d], Reason = [Started], Message = [Started container filler-pod-700d3b8a-6403-4d5d-8ff8-e1fb350396f3] STEP: Considering event: Type = [Warning], Name = [additional-pod.1614e00999c13ed2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1614e0099d7d26b3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:21.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-303" for this suite. 
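------------------------------
Note: the FailedScheduling events above are the expected outcome of this predicates test: after filler pods consume almost all allocatable CPU on both workers, one more pod requesting CPU cannot fit anywhere. A rough hand-run equivalent is sketched below; the pod name is illustrative, and the 11130m figure is from this run, so it would need adjusting to exceed the free CPU of your own nodes.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-hog              # illustrative name
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
        resources:
          requests:
            cpu: "11130m"        # request more CPU than any node has free
    EOF
    # The scheduler should emit the same kind of event seen above:
    kubectl get events --field-selector reason=FailedScheduling
------------------------------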
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.596 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":102,"skipped":1595,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:21.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:08:21.777: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:08:23.781: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:08:25.780: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:27.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:29.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:31.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:33.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:35.781: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:37.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:39.781: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:41.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = false) Jun 3 00:08:43.782: INFO: The status of Pod test-webserver-738f9d62-0adc-44b4-9263-82a6df8bce17 is Running (Ready = true) Jun 3 00:08:43.785: INFO: Container started at 2020-06-03 00:08:24 +0000 UTC, pod became ready at 2020-06-03 00:08:42 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:43.785: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6087" for this suite. • [SLOW TEST:22.318 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:43.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-895a81b4-af73-444a-97be-f1697f2602f9 STEP: Creating a pod to test consume secrets Jun 3 00:08:44.380: INFO: Waiting up to 5m0s for pod "pod-secrets-84118409-bef1-4a71-bc0e-4684931db340" in namespace "secrets-1667" to be "Succeeded or Failed" Jun 3 00:08:44.412: INFO: Pod "pod-secrets-84118409-bef1-4a71-bc0e-4684931db340": Phase="Pending", Reason="", readiness=false. Elapsed: 32.125518ms Jun 3 00:08:46.454: INFO: Pod "pod-secrets-84118409-bef1-4a71-bc0e-4684931db340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07389681s Jun 3 00:08:48.458: INFO: Pod "pod-secrets-84118409-bef1-4a71-bc0e-4684931db340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078373573s STEP: Saw pod success Jun 3 00:08:48.458: INFO: Pod "pod-secrets-84118409-bef1-4a71-bc0e-4684931db340" satisfied condition "Succeeded or Failed" Jun 3 00:08:48.462: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-84118409-bef1-4a71-bc0e-4684931db340 container secret-volume-test: STEP: delete the pod Jun 3 00:08:48.504: INFO: Waiting for pod pod-secrets-84118409-bef1-4a71-bc0e-4684931db340 to disappear Jun 3 00:08:48.514: INFO: Pod pod-secrets-84118409-bef1-4a71-bc0e-4684931db340 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:48.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1667" for this suite. 
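------------------------------
Note: the secrets test above mounts a Secret as a volume and remaps its key to a custom file path via "items". A self-contained sketch of that shape (all names and values illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: demo-secret
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # the key-to-path mapping under test
    EOF
    kubectl logs secret-mapping-demo   # prints "value-1" once the pod has run
------------------------------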
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1661,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:48.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-c57756d4-83f3-4d43-8b30-bbb3be55f151 STEP: Creating configMap with name cm-test-opt-upd-e7bbb3fc-9c15-4d7a-be77-db2058deae14 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c57756d4-83f3-4d43-8b30-bbb3be55f151 STEP: Updating configmap cm-test-opt-upd-e7bbb3fc-9c15-4d7a-be77-db2058deae14 STEP: Creating configMap with name cm-test-opt-create-dee4bc7c-0d49-4823-8c42-d6c6f8592df1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:56.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1529" for this suite. • [SLOW TEST:8.262 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1670,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:56.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 3 00:08:56.910: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5101 
/api/v1/namespaces/watch-5101/configmaps/e2e-watch-test-resource-version d53ea45c-543d-4eff-818f-ac63fd84b631 9802293 0 2020-06-03 00:08:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-03 00:08:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:08:56.910: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5101 /api/v1/namespaces/watch-5101/configmaps/e2e-watch-test-resource-version d53ea45c-543d-4eff-818f-ac63fd84b631 9802294 0 2020-06-03 00:08:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-03 00:08:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:56.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5101" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":106,"skipped":1681,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:57.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:08:58.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-878" for this suite. 
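------------------------------
Note: the Watchers test above starts a watch at the resourceVersion returned by the first update, which is why only the second MODIFIED event and the DELETED event are observed. Outside the e2e framework, the same semantics can be seen against the raw API; the sketch below assumes a pre-existing ConfigMap named demo-cm in the default namespace.

    RV=$(kubectl get configmap demo-cm -o jsonpath='{.metadata.resourceVersion}')
    kubectl proxy --port=8001 &
    sleep 1
    # Only changes made after resourceVersion $RV are streamed back:
    curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
    kill %1
------------------------------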
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":107,"skipped":1682,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:08:58.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:08:58.466: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 3 00:09:00.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 create -f -' Jun 3 00:09:06.603: INFO: stderr: "" Jun 3 00:09:06.603: INFO: stdout: "e2e-test-crd-publish-openapi-7363-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 3 00:09:06.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 delete e2e-test-crd-publish-openapi-7363-crds test-foo' Jun 3 00:09:06.724: INFO: stderr: "" Jun 3 00:09:06.724: INFO: stdout: "e2e-test-crd-publish-openapi-7363-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 3 00:09:06.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 apply -f -' Jun 3 00:09:07.523: INFO: stderr: "" Jun 3 00:09:07.523: INFO: stdout: "e2e-test-crd-publish-openapi-7363-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 3 00:09:07.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 delete e2e-test-crd-publish-openapi-7363-crds test-foo' Jun 3 00:09:07.646: INFO: stderr: "" Jun 3 00:09:07.646: INFO: stdout: "e2e-test-crd-publish-openapi-7363-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 3 00:09:07.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 create -f -' Jun 3 00:09:07.924: INFO: rc: 1 Jun 3 00:09:07.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 apply -f -' Jun 3 00:09:08.156: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 3 00:09:08.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 create -f -' Jun 3 00:09:08.414: INFO: rc: 1 Jun 3 
00:09:08.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1340 apply -f -' Jun 3 00:09:08.642: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 3 00:09:08.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7363-crds' Jun 3 00:09:09.506: INFO: stderr: "" Jun 3 00:09:09.506: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7363-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 3 00:09:09.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7363-crds.metadata' Jun 3 00:09:09.747: INFO: stderr: "" Jun 3 00:09:09.747: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7363-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 3 00:09:09.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7363-crds.spec' Jun 3 00:09:10.014: INFO: stderr: "" Jun 3 00:09:10.014: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7363-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 3 00:09:10.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7363-crds.spec.bars' Jun 3 00:09:10.259: INFO: stderr: "" Jun 3 00:09:10.259: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7363-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 3 00:09:10.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7363-crds.spec.bars2' Jun 3 00:09:10.528: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:13.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1340" for this suite. • [SLOW TEST:15.086 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":108,"skipped":1682,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:13.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 3 00:09:13.524: INFO: Waiting up to 5m0s for pod "pod-b10ff6a3-e160-4316-8991-af98bdfa839e" in namespace "emptydir-7485" to be "Succeeded or Failed" Jun 3 00:09:13.575: INFO: Pod "pod-b10ff6a3-e160-4316-8991-af98bdfa839e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.648587ms Jun 3 00:09:15.579: INFO: Pod "pod-b10ff6a3-e160-4316-8991-af98bdfa839e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055472818s Jun 3 00:09:17.583: INFO: Pod "pod-b10ff6a3-e160-4316-8991-af98bdfa839e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059367496s STEP: Saw pod success Jun 3 00:09:17.583: INFO: Pod "pod-b10ff6a3-e160-4316-8991-af98bdfa839e" satisfied condition "Succeeded or Failed" Jun 3 00:09:17.587: INFO: Trying to get logs from node latest-worker2 pod pod-b10ff6a3-e160-4316-8991-af98bdfa839e container test-container: STEP: delete the pod Jun 3 00:09:17.660: INFO: Waiting for pod pod-b10ff6a3-e160-4316-8991-af98bdfa839e to disappear Jun 3 00:09:17.666: INFO: Pod pod-b10ff6a3-e160-4316-8991-af98bdfa839e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:17.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7485" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1688,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:17.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Jun 3 00:09:17.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' Jun 3 00:09:17.943: INFO: stderr: "" Jun 3 00:09:17.943: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:17.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2484" for this suite. 
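------------------------------
Note: the api-versions check above is the simplest test in this stretch; stripped of the framework scaffolding it is a one-liner:

    # grep -x matches the whole line, so this only succeeds if the core "v1" group is served:
    kubectl api-versions | grep -x v1 && echo "v1 is available"
------------------------------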
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":110,"skipped":1688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:17.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:09:18.035: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9886" for this suite. • [SLOW TEST:6.367 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":111,"skipped":1766,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:24.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 3 00:09:29.565: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] 
ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:30.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1936" for this suite. • [SLOW TEST:6.282 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":112,"skipped":1787,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:30.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jun 3 00:09:30.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6443' Jun 3 00:09:31.679: INFO: stderr: "" Jun 3 00:09:31.679: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 00:09:31.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6443' Jun 3 00:09:31.834: INFO: stderr: "" Jun 3 00:09:31.834: INFO: stdout: "update-demo-nautilus-7pq5w update-demo-nautilus-xgqwk " Jun 3 00:09:31.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7pq5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6443' Jun 3 00:09:32.034: INFO: stderr: "" Jun 3 00:09:32.034: INFO: stdout: "" Jun 3 00:09:32.034: INFO: update-demo-nautilus-7pq5w is created but not running Jun 3 00:09:37.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6443' Jun 3 00:09:37.138: INFO: stderr: "" Jun 3 00:09:37.138: INFO: stdout: "update-demo-nautilus-7pq5w update-demo-nautilus-xgqwk " Jun 3 00:09:37.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7pq5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6443' Jun 3 00:09:37.233: INFO: stderr: "" Jun 3 00:09:37.233: INFO: stdout: "true" Jun 3 00:09:37.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7pq5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6443' Jun 3 00:09:37.341: INFO: stderr: "" Jun 3 00:09:37.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:09:37.341: INFO: validating pod update-demo-nautilus-7pq5w Jun 3 00:09:37.352: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:09:37.352: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:09:37.353: INFO: update-demo-nautilus-7pq5w is verified up and running Jun 3 00:09:37.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xgqwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6443' Jun 3 00:09:37.447: INFO: stderr: "" Jun 3 00:09:37.447: INFO: stdout: "true" Jun 3 00:09:37.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xgqwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6443' Jun 3 00:09:37.531: INFO: stderr: "" Jun 3 00:09:37.531: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:09:37.531: INFO: validating pod update-demo-nautilus-xgqwk Jun 3 00:09:37.539: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:09:37.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:09:37.539: INFO: update-demo-nautilus-xgqwk is verified up and running STEP: using delete to clean up resources Jun 3 00:09:37.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6443' Jun 3 00:09:37.655: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:09:37.655: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 3 00:09:37.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6443' Jun 3 00:09:37.767: INFO: stderr: "No resources found in kubectl-6443 namespace.\n" Jun 3 00:09:37.767: INFO: stdout: "" Jun 3 00:09:37.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6443 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 00:09:37.873: INFO: stderr: "" Jun 3 00:09:37.873: INFO: stdout: "update-demo-nautilus-7pq5w\nupdate-demo-nautilus-xgqwk\n" Jun 3 00:09:38.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6443' Jun 3 00:09:38.469: INFO: stderr: "No resources found in kubectl-6443 namespace.\n" Jun 3 00:09:38.469: INFO: stdout: "" Jun 3 00:09:38.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6443 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 00:09:38.708: INFO: stderr: "" Jun 3 00:09:38.708: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:38.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6443" for this suite. • [SLOW TEST:8.282 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":113,"skipped":1793,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:38.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8698" for this suite. • [SLOW TEST:11.239 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":114,"skipped":1795,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:50.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:09:50.214: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0baaf599-6f9a-407e-858a-dcf3b63cd0a2" in namespace "security-context-test-5799" to be "Succeeded or Failed" Jun 3 00:09:50.218: INFO: Pod "alpine-nnp-false-0baaf599-6f9a-407e-858a-dcf3b63cd0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.499854ms Jun 3 00:09:52.222: INFO: Pod "alpine-nnp-false-0baaf599-6f9a-407e-858a-dcf3b63cd0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007702344s Jun 3 00:09:54.227: INFO: Pod "alpine-nnp-false-0baaf599-6f9a-407e-858a-dcf3b63cd0a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012374475s Jun 3 00:09:54.227: INFO: Pod "alpine-nnp-false-0baaf599-6f9a-407e-858a-dcf3b63cd0a2" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:09:54.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5799" for this suite. 
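------------------------------
Note: the Security Context test above asserts that a container started with allowPrivilegeEscalation: false cannot gain privileges (the kernel's no_new_privs flag is set for the process). A hand-runnable sketch along the same lines, with an illustrative pod name; the NoNewPrivs line in /proc/self/status should read 1:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-priv-esc
    spec:
      restartPolicy: Never
      containers:
      - name: alpine
        image: alpine
        command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
        securityContext:
          allowPrivilegeEscalation: false
    EOF
    # Once the pod has completed:
    kubectl logs no-priv-esc   # expect "NoNewPrivs: 1"
------------------------------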
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1816,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:09:54.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-d438fc5b-5c90-47ed-bdde-a8101002161b in namespace container-probe-6900 Jun 3 00:09:58.584: INFO: Started pod liveness-d438fc5b-5c90-47ed-bdde-a8101002161b in namespace container-probe-6900 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 00:09:58.587: INFO: Initial restart count of pod liveness-d438fc5b-5c90-47ed-bdde-a8101002161b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:13:59.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6900" for this suite. 
• [SLOW TEST:245.056 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":116,"skipped":1833,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:13:59.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-6334/secret-test-0b911804-0783-464a-9612-890d86c96824 STEP: Creating a pod to test consume secrets Jun 3 00:13:59.453: INFO: Waiting up to 5m0s for pod "pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f" in namespace "secrets-6334" to be "Succeeded or Failed" Jun 3 00:13:59.566: INFO: Pod "pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f": Phase="Pending", Reason="", readiness=false. Elapsed: 112.538097ms Jun 3 00:14:01.570: INFO: Pod "pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117062282s Jun 3 00:14:03.575: INFO: Pod "pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121679185s STEP: Saw pod success Jun 3 00:14:03.575: INFO: Pod "pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f" satisfied condition "Succeeded or Failed" Jun 3 00:14:03.579: INFO: Trying to get logs from node latest-worker pod pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f container env-test: STEP: delete the pod Jun 3 00:14:03.608: INFO: Waiting for pod pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f to disappear Jun 3 00:14:03.612: INFO: Pod pod-configmaps-69291acf-483b-48f2-9530-da2d8933c90f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:14:03.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6334" for this suite. 
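------------------------------
Note: the secrets test above consumes a Secret through the container's environment rather than through a volume. The corresponding manifest shape (names and values illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: env-secret
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env | grep SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: env-secret
              key: data-1
    EOF
    kubectl logs secret-env-demo   # expect "SECRET_DATA=value-1"
------------------------------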
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":1847,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:14:03.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6712 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6712 I0603 00:14:03.768470 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6712, replica count: 2 I0603 00:14:06.818911 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:14:09.819148 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:14:09.819: INFO: Creating new exec pod Jun 3 00:14:14.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6712 execpodrn5zv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 3 00:14:15.106: INFO: stderr: "I0603 00:14:14.975469 1195 log.go:172] (0xc000c6b080) (0xc000b36460) Create stream\nI0603 00:14:14.975542 1195 log.go:172] (0xc000c6b080) (0xc000b36460) Stream added, broadcasting: 1\nI0603 00:14:14.980061 1195 log.go:172] (0xc000c6b080) Reply frame received for 1\nI0603 00:14:14.980113 1195 log.go:172] (0xc000c6b080) (0xc0005c06e0) Create stream\nI0603 00:14:14.980129 1195 log.go:172] (0xc000c6b080) (0xc0005c06e0) Stream added, broadcasting: 3\nI0603 00:14:14.981572 1195 log.go:172] (0xc000c6b080) Reply frame received for 3\nI0603 00:14:14.981607 1195 log.go:172] (0xc000c6b080) (0xc0005263c0) Create stream\nI0603 00:14:14.981621 1195 log.go:172] (0xc000c6b080) (0xc0005263c0) Stream added, broadcasting: 5\nI0603 00:14:14.982745 1195 log.go:172] (0xc000c6b080) Reply frame received for 5\nI0603 00:14:15.075808 1195 log.go:172] (0xc000c6b080) Data frame received for 5\nI0603 00:14:15.075835 1195 log.go:172] (0xc0005263c0) (5) Data frame handling\nI0603 00:14:15.075853 1195 log.go:172] (0xc0005263c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0603 00:14:15.096837 1195 log.go:172] (0xc000c6b080) Data frame received for 5\nI0603 00:14:15.096883 1195 log.go:172] (0xc0005263c0) (5) Data frame handling\nI0603 00:14:15.096921 1195 log.go:172] (0xc0005263c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0603 00:14:15.097450 1195 log.go:172] 
(0xc000c6b080) Data frame received for 3\nI0603 00:14:15.097499 1195 log.go:172] (0xc0005c06e0) (3) Data frame handling\nI0603 00:14:15.097534 1195 log.go:172] (0xc000c6b080) Data frame received for 5\nI0603 00:14:15.097561 1195 log.go:172] (0xc0005263c0) (5) Data frame handling\nI0603 00:14:15.099396 1195 log.go:172] (0xc000c6b080) Data frame received for 1\nI0603 00:14:15.099423 1195 log.go:172] (0xc000b36460) (1) Data frame handling\nI0603 00:14:15.099436 1195 log.go:172] (0xc000b36460) (1) Data frame sent\nI0603 00:14:15.099457 1195 log.go:172] (0xc000c6b080) (0xc000b36460) Stream removed, broadcasting: 1\nI0603 00:14:15.099796 1195 log.go:172] (0xc000c6b080) (0xc000b36460) Stream removed, broadcasting: 1\nI0603 00:14:15.099818 1195 log.go:172] (0xc000c6b080) (0xc0005c06e0) Stream removed, broadcasting: 3\nI0603 00:14:15.100010 1195 log.go:172] (0xc000c6b080) (0xc0005263c0) Stream removed, broadcasting: 5\n" Jun 3 00:14:15.106: INFO: stdout: "" Jun 3 00:14:15.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6712 execpodrn5zv -- /bin/sh -x -c nc -zv -t -w 2 10.103.47.41 80' Jun 3 00:14:15.319: INFO: stderr: "I0603 00:14:15.229661 1217 log.go:172] (0xc0009133f0) (0xc000828f00) Create stream\nI0603 00:14:15.229722 1217 log.go:172] (0xc0009133f0) (0xc000828f00) Stream added, broadcasting: 1\nI0603 00:14:15.232145 1217 log.go:172] (0xc0009133f0) Reply frame received for 1\nI0603 00:14:15.232174 1217 log.go:172] (0xc0009133f0) (0xc000700fa0) Create stream\nI0603 00:14:15.232183 1217 log.go:172] (0xc0009133f0) (0xc000700fa0) Stream added, broadcasting: 3\nI0603 00:14:15.232921 1217 log.go:172] (0xc0009133f0) Reply frame received for 3\nI0603 00:14:15.232948 1217 log.go:172] (0xc0009133f0) (0xc000701540) Create stream\nI0603 00:14:15.232957 1217 log.go:172] (0xc0009133f0) (0xc000701540) Stream added, broadcasting: 5\nI0603 00:14:15.233989 1217 log.go:172] (0xc0009133f0) Reply frame received for 5\nI0603 00:14:15.312529 1217 log.go:172] (0xc0009133f0) Data frame received for 3\nI0603 00:14:15.312568 1217 log.go:172] (0xc000700fa0) (3) Data frame handling\nI0603 00:14:15.312598 1217 log.go:172] (0xc0009133f0) Data frame received for 5\nI0603 00:14:15.312606 1217 log.go:172] (0xc000701540) (5) Data frame handling\nI0603 00:14:15.312626 1217 log.go:172] (0xc000701540) (5) Data frame sent\nI0603 00:14:15.312641 1217 log.go:172] (0xc0009133f0) Data frame received for 5\nI0603 00:14:15.312650 1217 log.go:172] (0xc000701540) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.47.41 80\nConnection to 10.103.47.41 80 port [tcp/http] succeeded!\nI0603 00:14:15.313976 1217 log.go:172] (0xc0009133f0) Data frame received for 1\nI0603 00:14:15.314003 1217 log.go:172] (0xc000828f00) (1) Data frame handling\nI0603 00:14:15.314017 1217 log.go:172] (0xc000828f00) (1) Data frame sent\nI0603 00:14:15.314028 1217 log.go:172] (0xc0009133f0) (0xc000828f00) Stream removed, broadcasting: 1\nI0603 00:14:15.314245 1217 log.go:172] (0xc0009133f0) Go away received\nI0603 00:14:15.314286 1217 log.go:172] (0xc0009133f0) (0xc000828f00) Stream removed, broadcasting: 1\nI0603 00:14:15.314310 1217 log.go:172] (0xc0009133f0) (0xc000700fa0) Stream removed, broadcasting: 3\nI0603 00:14:15.314321 1217 log.go:172] (0xc0009133f0) (0xc000701540) Stream removed, broadcasting: 5\n" Jun 3 00:14:15.320: INFO: stdout: "" Jun 3 00:14:15.320: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:14:15.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6712" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.846 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":118,"skipped":1848,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:14:15.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 3 00:14:15.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1344' Jun 3 00:14:15.685: INFO: stderr: "" Jun 3 00:14:15.685: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 Jun 3 00:14:15.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1344' Jun 3 00:14:24.931: INFO: stderr: "" Jun 3 00:14:24.931: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:14:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1344" for this suite. 
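The kubectl test above shells out to the CLI exactly as logged; a rough Go sketch of the same invocation follows (pod name and namespace are illustrative). The detail being tested is that `--restart=Never` makes `kubectl run` create a bare Pod rather than a managed workload.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Reproduces the logged kubectl call in spirit: with --restart=Never,
    // `kubectl run` creates a bare Pod (no Deployment or Job wrapping it).
    out, err := exec.Command("kubectl",
        "run", "e2e-test-httpd-pod",
        "--restart=Never",
        "--image=docker.io/library/httpd:2.4.38-alpine",
        "--namespace=default", // the test used its own generated namespace
    ).CombinedOutput()
    fmt.Print(string(out))
    if err != nil {
        fmt.Println("kubectl failed:", err)
    }
}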
• [SLOW TEST:9.476 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":119,"skipped":1856,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:14:24.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:14:25.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff" in namespace "projected-6570" to be "Succeeded or Failed" Jun 3 00:14:25.074: INFO: Pod "downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.400749ms Jun 3 00:14:27.079: INFO: Pod "downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012128327s Jun 3 00:14:29.083: INFO: Pod "downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016889756s STEP: Saw pod success Jun 3 00:14:29.083: INFO: Pod "downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff" satisfied condition "Succeeded or Failed" Jun 3 00:14:29.086: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff container client-container: STEP: delete the pod Jun 3 00:14:29.322: INFO: Waiting for pod downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff to disappear Jun 3 00:14:29.331: INFO: Pod downwardapi-volume-d628afd1-bf85-41e6-81f6-b432906ce6ff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:14:29.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6570" for this suite. 
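The projected downwardAPI test above checks that a container can read its own memory limit from a file; here is a minimal sketch of that volume wiring, with all names and the 64Mi limit chosen for illustration rather than taken from the test.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The projected downwardAPI volume writes the container's memory limit
    // into /etc/podinfo/memory_limit, which the container reads back.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox:1.29", // illustrative image
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}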
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1863,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:14:29.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-6f40a664-1bc4-4b34-928a-7c1d9d4b10fe in namespace container-probe-8449 Jun 3 00:14:33.480: INFO: Started pod liveness-6f40a664-1bc4-4b34-928a-7c1d9d4b10fe in namespace container-probe-8449 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 00:14:33.482: INFO: Initial restart count of pod liveness-6f40a664-1bc4-4b34-928a-7c1d9d4b10fe is 0 Jun 3 00:14:57.584: INFO: Restart count of pod container-probe-8449/liveness-6f40a664-1bc4-4b34-928a-7c1d9d4b10fe is now 1 (24.101732887s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:14:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8449" for this suite. 
• [SLOW TEST:28.322 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":1874,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:14:57.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4775 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 00:14:57.717: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 00:14:58.092: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:15:00.105: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:15:02.129: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:04.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:06.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:08.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:10.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:12.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:14.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:16.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:18.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:15:20.097: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 00:15:20.102: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 00:15:24.121: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.151:8080/dial?request=hostname&protocol=http&host=10.244.1.150&port=8080&tries=1'] Namespace:pod-network-test-4775 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:15:24.121: INFO: >>> kubeConfig: /root/.kube/config I0603 00:15:24.156967 7 log.go:172] (0xc001416000) (0xc0021792c0) Create stream I0603 00:15:24.156996 7 log.go:172] (0xc001416000) (0xc0021792c0) Stream added, broadcasting: 1 I0603 00:15:24.159302 7 log.go:172] (0xc001416000) Reply frame received for 1 I0603 00:15:24.159516 7 log.go:172] (0xc001416000) (0xc002179400) Create stream I0603 00:15:24.159547 7 log.go:172] 
(0xc001416000) (0xc002179400) Stream added, broadcasting: 3 I0603 00:15:24.160578 7 log.go:172] (0xc001416000) Reply frame received for 3 I0603 00:15:24.160596 7 log.go:172] (0xc001416000) (0xc002179540) Create stream I0603 00:15:24.160604 7 log.go:172] (0xc001416000) (0xc002179540) Stream added, broadcasting: 5 I0603 00:15:24.161595 7 log.go:172] (0xc001416000) Reply frame received for 5 I0603 00:15:24.407407 7 log.go:172] (0xc001416000) Data frame received for 3 I0603 00:15:24.407467 7 log.go:172] (0xc002179400) (3) Data frame handling I0603 00:15:24.407520 7 log.go:172] (0xc002179400) (3) Data frame sent I0603 00:15:24.408042 7 log.go:172] (0xc001416000) Data frame received for 3 I0603 00:15:24.408089 7 log.go:172] (0xc002179400) (3) Data frame handling I0603 00:15:24.408290 7 log.go:172] (0xc001416000) Data frame received for 5 I0603 00:15:24.408305 7 log.go:172] (0xc002179540) (5) Data frame handling I0603 00:15:24.411016 7 log.go:172] (0xc001416000) Data frame received for 1 I0603 00:15:24.411031 7 log.go:172] (0xc0021792c0) (1) Data frame handling I0603 00:15:24.411038 7 log.go:172] (0xc0021792c0) (1) Data frame sent I0603 00:15:24.411328 7 log.go:172] (0xc001416000) (0xc0021792c0) Stream removed, broadcasting: 1 I0603 00:15:24.411445 7 log.go:172] (0xc001416000) (0xc0021792c0) Stream removed, broadcasting: 1 I0603 00:15:24.411460 7 log.go:172] (0xc001416000) (0xc002179400) Stream removed, broadcasting: 3 I0603 00:15:24.411475 7 log.go:172] (0xc001416000) Go away received I0603 00:15:24.411514 7 log.go:172] (0xc001416000) (0xc002179540) Stream removed, broadcasting: 5 Jun 3 00:15:24.411: INFO: Waiting for responses: map[] Jun 3 00:15:24.415: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.151:8080/dial?request=hostname&protocol=http&host=10.244.2.101&port=8080&tries=1'] Namespace:pod-network-test-4775 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:15:24.415: INFO: >>> kubeConfig: /root/.kube/config I0603 00:15:24.455326 7 log.go:172] (0xc000fa6370) (0xc001d80640) Create stream I0603 00:15:24.455351 7 log.go:172] (0xc000fa6370) (0xc001d80640) Stream added, broadcasting: 1 I0603 00:15:24.457984 7 log.go:172] (0xc000fa6370) Reply frame received for 1 I0603 00:15:24.458009 7 log.go:172] (0xc000fa6370) (0xc002548a00) Create stream I0603 00:15:24.458016 7 log.go:172] (0xc000fa6370) (0xc002548a00) Stream added, broadcasting: 3 I0603 00:15:24.459128 7 log.go:172] (0xc000fa6370) Reply frame received for 3 I0603 00:15:24.459181 7 log.go:172] (0xc000fa6370) (0xc0015e6780) Create stream I0603 00:15:24.459198 7 log.go:172] (0xc000fa6370) (0xc0015e6780) Stream added, broadcasting: 5 I0603 00:15:24.460123 7 log.go:172] (0xc000fa6370) Reply frame received for 5 I0603 00:15:24.518201 7 log.go:172] (0xc000fa6370) Data frame received for 3 I0603 00:15:24.518262 7 log.go:172] (0xc002548a00) (3) Data frame handling I0603 00:15:24.518308 7 log.go:172] (0xc002548a00) (3) Data frame sent I0603 00:15:24.518541 7 log.go:172] (0xc000fa6370) Data frame received for 5 I0603 00:15:24.518571 7 log.go:172] (0xc0015e6780) (5) Data frame handling I0603 00:15:24.518764 7 log.go:172] (0xc000fa6370) Data frame received for 3 I0603 00:15:24.518787 7 log.go:172] (0xc002548a00) (3) Data frame handling I0603 00:15:24.520352 7 log.go:172] (0xc000fa6370) Data frame received for 1 I0603 00:15:24.520368 7 log.go:172] (0xc001d80640) (1) Data frame handling I0603 00:15:24.520377 7 log.go:172] (0xc001d80640) (1) Data 
frame sent I0603 00:15:24.520388 7 log.go:172] (0xc000fa6370) (0xc001d80640) Stream removed, broadcasting: 1 I0603 00:15:24.520400 7 log.go:172] (0xc000fa6370) Go away received I0603 00:15:24.520553 7 log.go:172] (0xc000fa6370) (0xc001d80640) Stream removed, broadcasting: 1 I0603 00:15:24.520640 7 log.go:172] (0xc000fa6370) (0xc002548a00) Stream removed, broadcasting: 3 I0603 00:15:24.520679 7 log.go:172] (0xc000fa6370) (0xc0015e6780) Stream removed, broadcasting: 5 Jun 3 00:15:24.520: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:15:24.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4775" for this suite. • [SLOW TEST:26.879 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":1884,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:15:24.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 3 00:15:25.497: INFO: Waiting up to 5m0s for pod "downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42" in namespace "downward-api-1055" to be "Succeeded or Failed" Jun 3 00:15:25.561: INFO: Pod "downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42": Phase="Pending", Reason="", readiness=false. Elapsed: 63.653834ms Jun 3 00:15:27.564: INFO: Pod "downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066944071s Jun 3 00:15:29.567: INFO: Pod "downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069647004s STEP: Saw pod success Jun 3 00:15:29.567: INFO: Pod "downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42" satisfied condition "Succeeded or Failed" Jun 3 00:15:29.569: INFO: Trying to get logs from node latest-worker2 pod downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42 container dapi-container: STEP: delete the pod Jun 3 00:15:29.712: INFO: Waiting for pod downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42 to disappear Jun 3 00:15:29.787: INFO: Pod downward-api-dd7af7f2-303b-42f8-b618-f57cd837ce42 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:15:29.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1055" for this suite. • [SLOW TEST:5.248 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":1909,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:15:29.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 3 00:15:36.384: INFO: Successfully updated pod "adopt-release-5lcn7" STEP: Checking that the Job readopts the Pod Jun 3 00:15:36.384: INFO: Waiting up to 15m0s for pod "adopt-release-5lcn7" in namespace "job-5290" to be "adopted" Jun 3 00:15:36.393: INFO: Pod "adopt-release-5lcn7": Phase="Running", Reason="", readiness=true. Elapsed: 8.661771ms Jun 3 00:15:38.398: INFO: Pod "adopt-release-5lcn7": Phase="Running", Reason="", readiness=true. Elapsed: 2.013269093s Jun 3 00:15:38.398: INFO: Pod "adopt-release-5lcn7" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 3 00:15:38.908: INFO: Successfully updated pod "adopt-release-5lcn7" STEP: Checking that the Job releases the Pod Jun 3 00:15:38.908: INFO: Waiting up to 15m0s for pod "adopt-release-5lcn7" in namespace "job-5290" to be "released" Jun 3 00:15:38.930: INFO: Pod "adopt-release-5lcn7": Phase="Running", Reason="", readiness=true. Elapsed: 22.346643ms Jun 3 00:15:40.943: INFO: Pod "adopt-release-5lcn7": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.034943446s Jun 3 00:15:40.943: INFO: Pod "adopt-release-5lcn7" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:15:40.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5290" for this suite. • [SLOW TEST:11.155 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":124,"skipped":1944,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:15:40.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 3 00:15:45.647: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f0a34972-85b8-4e6d-b6b9-e0db5070d42d" Jun 3 00:15:45.647: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f0a34972-85b8-4e6d-b6b9-e0db5070d42d" in namespace "pods-9902" to be "terminated due to deadline exceeded" Jun 3 00:15:45.679: INFO: Pod "pod-update-activedeadlineseconds-f0a34972-85b8-4e6d-b6b9-e0db5070d42d": Phase="Running", Reason="", readiness=true. Elapsed: 32.724011ms Jun 3 00:15:47.684: INFO: Pod "pod-update-activedeadlineseconds-f0a34972-85b8-4e6d-b6b9-e0db5070d42d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.037204978s Jun 3 00:15:47.684: INFO: Pod "pod-update-activedeadlineseconds-f0a34972-85b8-4e6d-b6b9-e0db5070d42d" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:15:47.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9902" for this suite. 
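The activeDeadlineSeconds test above updates a running pod and waits for Phase="Failed", Reason="DeadlineExceeded". A rough client-go sketch of that update step follows, assuming KUBECONFIG points at a cluster and that a pod named pod-update-demo exists in the default namespace; both names are hypothetical.

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Setting a short deadline on a running pod makes the kubelet terminate
    // it once the deadline passes; status.reason becomes DeadlineExceeded.
    patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
    pod, err := cs.CoreV1().Pods("default").Patch(context.TODO(),
        "pod-update-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("patched pod:", pod.Name)
}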
• [SLOW TEST:6.743 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":125,"skipped":1952,"failed":0} [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:15:47.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:03.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7867" for this suite. • [SLOW TEST:16.120 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":126,"skipped":1952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:03.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9597/configmap-test-99eeda2a-e624-4731-ba70-21ba6ec06a18 STEP: Creating a pod to test consume configMaps Jun 3 00:16:03.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4" in namespace "configmap-9597" to be "Succeeded or Failed" Jun 3 00:16:03.923: INFO: Pod "pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.606879ms Jun 3 00:16:05.985: INFO: Pod "pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.104999861s Jun 3 00:16:07.990: INFO: Pod "pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4": Phase="Running", Reason="", readiness=true. Elapsed: 4.109668025s Jun 3 00:16:09.994: INFO: Pod "pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113940762s STEP: Saw pod success Jun 3 00:16:09.994: INFO: Pod "pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4" satisfied condition "Succeeded or Failed" Jun 3 00:16:09.998: INFO: Trying to get logs from node latest-worker pod pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4 container env-test: STEP: delete the pod Jun 3 00:16:10.065: INFO: Waiting for pod pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4 to disappear Jun 3 00:16:10.122: INFO: Pod pod-configmaps-89a691fb-30a3-4c51-8f8d-690ceb145ee4 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:10.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9597" for this suite. • [SLOW TEST:6.317 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":1984,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:10.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:16:10.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' Jun 3 00:16:10.403: INFO: stderr: "" Jun 3 00:16:10.403: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:10.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4345" for this suite. 
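The `kubectl version` output captured above has two halves: the client version baked into the binary and the server version fetched from the apiserver's /version endpoint. As a sketch of the server half, the discovery client exposes the same data; this assumes KUBECONFIG is set and is illustrative rather than how kubectl itself is wired.

package main

import (
    "fmt"
    "os"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // ServerVersion hits /version on the apiserver; the fields mirror the
    // version.Info struct printed in the log (GitVersion, GoVersion, ...).
    info, err := dc.ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Printf("Server Version: %s (go %s, %s)\n", info.GitVersion, info.GoVersion, info.Platform)
}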
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":128,"skipped":1987,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:10.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:16:10.922: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:16:13.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740170, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740170, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740170, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:16:16.107: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 3 00:16:16.128: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:16.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3365" for this suite. STEP: Destroying namespace "webhook-3365-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.942 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":129,"skipped":1997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:16.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 3 00:16:16.432: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:25.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-286" for this suite. 
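The "submitted and removed" test above sets up a watch before creating the pod so it can assert that both the creation and the graceful deletion are observed as events. A rough sketch of that observation loop, assuming KUBECONFIG is set; the namespace and label selector are illustrative (the test labels its pod with a random value).

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Open the watch first, then create/delete the pod elsewhere; ADDED,
    // MODIFIED, and DELETED events arrive on the channel in order. The
    // loop runs until the watch is closed.
    w, err := cs.CoreV1().Pods("default").Watch(context.TODO(),
        metav1.ListOptions{LabelSelector: "app=watch-demo"})
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Println("observed event:", ev.Type)
    }
}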
• [SLOW TEST:9.004 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:25.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23 Jun 3 00:16:25.478: INFO: Pod name my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23: Found 0 pods out of 1 Jun 3 00:16:30.482: INFO: Pod name my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23: Found 1 pods out of 1 Jun 3 00:16:30.482: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23" are running Jun 3 00:16:30.484: INFO: Pod "my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23-w7v96" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 00:16:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 00:16:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 00:16:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 00:16:25 +0000 UTC Reason: Message:}]) Jun 3 00:16:30.485: INFO: Trying to dial the pod Jun 3 00:16:35.499: INFO: Controller my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23: Got expected result from replica 1 [my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23-w7v96]: "my-hostname-basic-599a9074-d14d-4815-a72f-13806b37ba23-w7v96", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:35.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6297" for this suite. 
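The ReplicationController test above dials each replica over HTTP and expects the pod's own hostname back. A minimal sketch of an equivalent controller follows; the name, image, and port are illustrative (agnhost's serve-hostname answers with the pod name, which is how "Got expected result from replica 1" is verified).

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "my-hostname-basic"}
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels, // RC selectors are plain label maps
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20", // illustrative
                        Args:  []string{"serve-hostname"},
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(rc, "", "  ")
    fmt.Println(string(out))
}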
• [SLOW TEST:10.146 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":131,"skipped":2046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:35.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 3 00:16:35.585: INFO: Waiting up to 5m0s for pod "pod-91fe125d-6f74-45cc-bd73-bc8c832018ee" in namespace "emptydir-8216" to be "Succeeded or Failed" Jun 3 00:16:35.599: INFO: Pod "pod-91fe125d-6f74-45cc-bd73-bc8c832018ee": Phase="Pending", Reason="", readiness=false. Elapsed: 14.35585ms Jun 3 00:16:37.650: INFO: Pod "pod-91fe125d-6f74-45cc-bd73-bc8c832018ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065038836s Jun 3 00:16:39.655: INFO: Pod "pod-91fe125d-6f74-45cc-bd73-bc8c832018ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069617651s STEP: Saw pod success Jun 3 00:16:39.655: INFO: Pod "pod-91fe125d-6f74-45cc-bd73-bc8c832018ee" satisfied condition "Succeeded or Failed" Jun 3 00:16:39.659: INFO: Trying to get logs from node latest-worker pod pod-91fe125d-6f74-45cc-bd73-bc8c832018ee container test-container: STEP: delete the pod Jun 3 00:16:39.735: INFO: Waiting for pod pod-91fe125d-6f74-45cc-bd73-bc8c832018ee to disappear Jun 3 00:16:39.738: INFO: Pod pod-91fe125d-6f74-45cc-bd73-bc8c832018ee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:39.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8216" for this suite. 
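The emptyDir test above runs as a non-root user on the default (disk-backed) medium and checks file mode and content. A minimal sketch of that setup; the UID, paths, and image are illustrative stand-ins for the test's generated values.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // any non-root UID; illustrative
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.29", // illustrative image
                // Write a 0644 file on the default-medium emptyDir and read
                // its permissions back, roughly what the test verifies.
                Command: []string{"sh", "-c",
                    "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name:         "test-volume",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}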
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2083,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:39.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:39.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3790" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":133,"skipped":2098,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:39.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:16:40.780: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:16:42.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740200, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740200, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740200, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740200, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:16:45.837: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:46.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7012" for this suite. STEP: Destroying namespace "webhook-7012-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.360 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":134,"skipped":2114,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:46.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 3 00:16:46.545: INFO: Waiting up to 5m0s for pod "pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124" in namespace "emptydir-3457" to be "Succeeded or Failed" Jun 3 00:16:46.606: INFO: Pod "pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124": Phase="Pending", Reason="", readiness=false. Elapsed: 61.234643ms Jun 3 00:16:48.610: INFO: Pod "pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065287413s Jun 3 00:16:50.614: INFO: Pod "pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.068996326s STEP: Saw pod success Jun 3 00:16:50.614: INFO: Pod "pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124" satisfied condition "Succeeded or Failed" Jun 3 00:16:50.617: INFO: Trying to get logs from node latest-worker2 pod pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124 container test-container: STEP: delete the pod Jun 3 00:16:50.676: INFO: Waiting for pod pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124 to disappear Jun 3 00:16:50.686: INFO: Pod pod-fcf1e59c-7d9a-40d0-a3bf-bd3283a16124 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:16:50.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3457" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2129,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:16:50.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9817 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 00:16:50.967: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 00:16:51.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:16:53.189: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:16:55.111: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:16:57.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:16:59.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:17:01.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:17:03.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:17:05.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:17:07.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:17:09.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:17:11.077: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 00:17:11.084: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 3 00:17:13.088: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 00:17:17.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostname&protocol=udp&host=10.244.1.163&port=8081&tries=1'] Namespace:pod-network-test-9817 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:17:17.124: INFO: >>> 
kubeConfig: /root/.kube/config I0603 00:17:17.156464 7 log.go:172] (0xc001b18000) (0xc001d10820) Create stream I0603 00:17:17.156495 7 log.go:172] (0xc001b18000) (0xc001d10820) Stream added, broadcasting: 1 I0603 00:17:17.158904 7 log.go:172] (0xc001b18000) Reply frame received for 1 I0603 00:17:17.158956 7 log.go:172] (0xc001b18000) (0xc001d10aa0) Create stream I0603 00:17:17.158984 7 log.go:172] (0xc001b18000) (0xc001d10aa0) Stream added, broadcasting: 3 I0603 00:17:17.160103 7 log.go:172] (0xc001b18000) Reply frame received for 3 I0603 00:17:17.160156 7 log.go:172] (0xc001b18000) (0xc001950000) Create stream I0603 00:17:17.160182 7 log.go:172] (0xc001b18000) (0xc001950000) Stream added, broadcasting: 5 I0603 00:17:17.161541 7 log.go:172] (0xc001b18000) Reply frame received for 5 I0603 00:17:17.371624 7 log.go:172] (0xc001b18000) Data frame received for 3 I0603 00:17:17.371648 7 log.go:172] (0xc001d10aa0) (3) Data frame handling I0603 00:17:17.371662 7 log.go:172] (0xc001d10aa0) (3) Data frame sent I0603 00:17:17.372397 7 log.go:172] (0xc001b18000) Data frame received for 3 I0603 00:17:17.372414 7 log.go:172] (0xc001d10aa0) (3) Data frame handling I0603 00:17:17.372540 7 log.go:172] (0xc001b18000) Data frame received for 5 I0603 00:17:17.372550 7 log.go:172] (0xc001950000) (5) Data frame handling I0603 00:17:17.374639 7 log.go:172] (0xc001b18000) Data frame received for 1 I0603 00:17:17.374653 7 log.go:172] (0xc001d10820) (1) Data frame handling I0603 00:17:17.374668 7 log.go:172] (0xc001d10820) (1) Data frame sent I0603 00:17:17.374680 7 log.go:172] (0xc001b18000) (0xc001d10820) Stream removed, broadcasting: 1 I0603 00:17:17.374728 7 log.go:172] (0xc001b18000) Go away received I0603 00:17:17.374857 7 log.go:172] (0xc001b18000) (0xc001d10820) Stream removed, broadcasting: 1 I0603 00:17:17.374884 7 log.go:172] (0xc001b18000) (0xc001d10aa0) Stream removed, broadcasting: 3 I0603 00:17:17.374904 7 log.go:172] (0xc001b18000) (0xc001950000) Stream removed, broadcasting: 5 Jun 3 00:17:17.374: INFO: Waiting for responses: map[] Jun 3 00:17:17.378: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostname&protocol=udp&host=10.244.2.109&port=8081&tries=1'] Namespace:pod-network-test-9817 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:17:17.378: INFO: >>> kubeConfig: /root/.kube/config I0603 00:17:17.404205 7 log.go:172] (0xc000fa62c0) (0xc001950960) Create stream I0603 00:17:17.404230 7 log.go:172] (0xc000fa62c0) (0xc001950960) Stream added, broadcasting: 1 I0603 00:17:17.405918 7 log.go:172] (0xc000fa62c0) Reply frame received for 1 I0603 00:17:17.405962 7 log.go:172] (0xc000fa62c0) (0xc001950a00) Create stream I0603 00:17:17.405980 7 log.go:172] (0xc000fa62c0) (0xc001950a00) Stream added, broadcasting: 3 I0603 00:17:17.406723 7 log.go:172] (0xc000fa62c0) Reply frame received for 3 I0603 00:17:17.406748 7 log.go:172] (0xc000fa62c0) (0xc001950aa0) Create stream I0603 00:17:17.406758 7 log.go:172] (0xc000fa62c0) (0xc001950aa0) Stream added, broadcasting: 5 I0603 00:17:17.407739 7 log.go:172] (0xc000fa62c0) Reply frame received for 5 I0603 00:17:17.544898 7 log.go:172] (0xc000fa62c0) Data frame received for 3 I0603 00:17:17.544919 7 log.go:172] (0xc001950a00) (3) Data frame handling I0603 00:17:17.544936 7 log.go:172] (0xc001950a00) (3) Data frame sent I0603 00:17:17.545768 7 log.go:172] (0xc000fa62c0) Data frame received for 5 I0603 00:17:17.545792 7 log.go:172] 
(0xc001950aa0) (5) Data frame handling I0603 00:17:17.545810 7 log.go:172] (0xc000fa62c0) Data frame received for 3 I0603 00:17:17.545819 7 log.go:172] (0xc001950a00) (3) Data frame handling I0603 00:17:17.547295 7 log.go:172] (0xc000fa62c0) Data frame received for 1 I0603 00:17:17.547371 7 log.go:172] (0xc001950960) (1) Data frame handling I0603 00:17:17.547426 7 log.go:172] (0xc001950960) (1) Data frame sent I0603 00:17:17.547454 7 log.go:172] (0xc000fa62c0) (0xc001950960) Stream removed, broadcasting: 1 I0603 00:17:17.547479 7 log.go:172] (0xc000fa62c0) Go away received I0603 00:17:17.547641 7 log.go:172] (0xc000fa62c0) (0xc001950960) Stream removed, broadcasting: 1 I0603 00:17:17.547674 7 log.go:172] (0xc000fa62c0) (0xc001950a00) Stream removed, broadcasting: 3 I0603 00:17:17.547698 7 log.go:172] (0xc000fa62c0) (0xc001950aa0) Stream removed, broadcasting: 5 Jun 3 00:17:17.547: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:17.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9817" for this suite. • [SLOW TEST:26.829 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2131,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:17.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 3 00:17:22.175: INFO: Successfully updated pod "annotationupdate611be7b9-bd80-4de4-a888-cf087b807442" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:24.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4558" for this suite. 
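The annotation-update check above relies on the kubelet re-projecting downward API volume contents when pod metadata changes. A minimal sketch of an equivalent pod (the name, image, and annotation key here are illustrative, not the test's actual manifest):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo        # hypothetical name
    annotations:
      build: one
  spec:
    containers:
    - name: client-container
      image: busybox                   # stand-in; the e2e test uses its own test image
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  # Updating the annotation eventually shows up in the mounted file, which is
  # what the "Successfully updated pod" log line above is verifying:
  kubectl annotate pod annotationupdate-demo --overwrite build=two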
• [SLOW TEST:6.642 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:24.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 3 00:17:24.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1681' Jun 3 00:17:24.921: INFO: stderr: "" Jun 3 00:17:24.921: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 3 00:17:25.926: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:17:25.926: INFO: Found 0 / 1 Jun 3 00:17:26.926: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:17:26.926: INFO: Found 0 / 1 Jun 3 00:17:27.925: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:17:27.925: INFO: Found 0 / 1 Jun 3 00:17:28.926: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:17:28.926: INFO: Found 1 / 1 Jun 3 00:17:28.926: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 3 00:17:28.929: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:17:28.929: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 3 00:17:28.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-sgvbg --namespace=kubectl-1681 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 3 00:17:29.046: INFO: stderr: "" Jun 3 00:17:29.046: INFO: stdout: "pod/agnhost-master-sgvbg patched\n" STEP: checking annotations Jun 3 00:17:29.106: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:17:29.106: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:29.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1681" for this suite. 
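The patch step logged above can be reproduced by hand with the same JSON merge patch; the pod name is generated by the replication controller, so substitute your own:

  POD=agnhost-master-sgvbg   # generated name from this run; look yours up with the selector
  kubectl get pods -l app=agnhost -n kubectl-1681
  kubectl patch pod "$POD" -n kubectl-1681 -p '{"metadata":{"annotations":{"x":"y"}}}'
  # The annotation check then amounts to:
  kubectl get pod "$POD" -n kubectl-1681 -o jsonpath='{.metadata.annotations.x}'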
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":138,"skipped":2165,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:29.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:17:29.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a" in namespace "projected-2355" to be "Succeeded or Failed" Jun 3 00:17:29.225: INFO: Pod "downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.823037ms Jun 3 00:17:31.286: INFO: Pod "downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081760797s Jun 3 00:17:33.290: INFO: Pod "downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085659756s STEP: Saw pod success Jun 3 00:17:33.290: INFO: Pod "downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a" satisfied condition "Succeeded or Failed" Jun 3 00:17:33.292: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a container client-container: STEP: delete the pod Jun 3 00:17:33.373: INFO: Waiting for pod downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a to disappear Jun 3 00:17:33.398: INFO: Pod downwardapi-volume-918085b7-2025-4554-91ed-653867020a9a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:33.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2355" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2186,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:33.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:17:33.610: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 3 00:17:33.670: INFO: Number of nodes with available pods: 0 Jun 3 00:17:33.670: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jun 3 00:17:33.751: INFO: Number of nodes with available pods: 0 Jun 3 00:17:33.751: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:34.755: INFO: Number of nodes with available pods: 0 Jun 3 00:17:34.755: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:35.755: INFO: Number of nodes with available pods: 0 Jun 3 00:17:35.755: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:36.758: INFO: Number of nodes with available pods: 0 Jun 3 00:17:36.758: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:37.756: INFO: Number of nodes with available pods: 1 Jun 3 00:17:37.756: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 3 00:17:37.818: INFO: Number of nodes with available pods: 1 Jun 3 00:17:37.818: INFO: Number of running nodes: 0, number of available pods: 1 Jun 3 00:17:38.823: INFO: Number of nodes with available pods: 0 Jun 3 00:17:38.823: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 3 00:17:38.884: INFO: Number of nodes with available pods: 0 Jun 3 00:17:38.884: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:39.888: INFO: Number of nodes with available pods: 0 Jun 3 00:17:39.889: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:40.890: INFO: Number of nodes with available pods: 0 Jun 3 00:17:40.890: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:41.889: INFO: Number of nodes with available pods: 0 Jun 3 00:17:41.889: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:42.888: INFO: Number of nodes with available pods: 0 Jun 3 00:17:42.888: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:43.887: INFO: Number of nodes with available pods: 0 Jun 3 00:17:43.887: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:17:44.888: INFO: Number of nodes with available 
pods: 1 Jun 3 00:17:44.888: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6382, will wait for the garbage collector to delete the pods Jun 3 00:17:44.951: INFO: Deleting DaemonSet.extensions daemon-set took: 6.37159ms Jun 3 00:17:45.252: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.296391ms Jun 3 00:17:54.955: INFO: Number of nodes with available pods: 0 Jun 3 00:17:54.955: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 00:17:54.958: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6382/daemonsets","resourceVersion":"9805125"},"items":null} Jun 3 00:17:54.960: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6382/pods","resourceVersion":"9805125"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:54.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6382" for this suite. • [SLOW TEST:21.595 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":140,"skipped":2197,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:55.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:55.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6801" for this suite. 
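The secure-master-service check above has no visible steps because it only inspects the built-in kubernetes service; the assertion boils down to something like:

  # The default/kubernetes service must exist and expose the API over HTTPS (port 443):
  kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].name} {.spec.ports[0].port}'
  # expected output, typically: https 443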
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":141,"skipped":2217,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:55.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Jun 3 00:17:55.181: INFO: Waiting up to 5m0s for pod "client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758" in namespace "containers-657" to be "Succeeded or Failed" Jun 3 00:17:55.244: INFO: Pod "client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758": Phase="Pending", Reason="", readiness=false. Elapsed: 62.695143ms Jun 3 00:17:57.248: INFO: Pod "client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066523602s Jun 3 00:17:59.252: INFO: Pod "client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070911848s STEP: Saw pod success Jun 3 00:17:59.252: INFO: Pod "client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758" satisfied condition "Succeeded or Failed" Jun 3 00:17:59.255: INFO: Trying to get logs from node latest-worker2 pod client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758 container test-container: STEP: delete the pod Jun 3 00:17:59.335: INFO: Waiting for pod client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758 to disappear Jun 3 00:17:59.352: INFO: Pod client-containers-d52a2ba2-b6a2-4418-b7aa-e354b5e59758 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:17:59.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-657" for this suite. 
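The override-arguments test above sets the pod's args field, which replaces the image's default CMD while leaving any ENTRYPOINT in place. A minimal sketch with a stand-in image (the suite uses its own entrypoint-testing image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: args-override-demo           # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      args: ["echo", "overridden"]     # overrides busybox's default CMD ("sh")
  EOF
  # Once the pod completes, its log shows the overridden output:
  kubectl logs args-override-demo      # prints: overridden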
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2231,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:17:59.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-1a5b559e-0fec-46ef-9590-430fc5c0ab7c STEP: Creating a pod to test consume secrets Jun 3 00:17:59.466: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852" in namespace "projected-1631" to be "Succeeded or Failed" Jun 3 00:17:59.486: INFO: Pod "pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852": Phase="Pending", Reason="", readiness=false. Elapsed: 19.724976ms Jun 3 00:18:01.490: INFO: Pod "pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024056089s Jun 3 00:18:03.495: INFO: Pod "pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028853035s STEP: Saw pod success Jun 3 00:18:03.495: INFO: Pod "pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852" satisfied condition "Succeeded or Failed" Jun 3 00:18:03.498: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852 container projected-secret-volume-test: STEP: delete the pod Jun 3 00:18:03.550: INFO: Waiting for pod pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852 to disappear Jun 3 00:18:03.556: INFO: Pod pod-projected-secrets-311c93dd-12eb-4083-ba32-88507038f852 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:03.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1631" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2238,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:03.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1998 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1998 STEP: creating replication controller externalsvc in namespace services-1998 I0603 00:18:03.880719 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1998, replica count: 2 I0603 00:18:06.931119 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:18:09.931364 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 3 00:18:09.982: INFO: Creating new exec pod Jun 3 00:18:14.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1998 execpodft6gb -- /bin/sh -x -c nslookup nodeport-service' Jun 3 00:18:14.384: INFO: stderr: "I0603 00:18:14.149261 1339 log.go:172] (0xc000c0d600) (0xc0007dde00) Create stream\nI0603 00:18:14.149332 1339 log.go:172] (0xc000c0d600) (0xc0007dde00) Stream added, broadcasting: 1\nI0603 00:18:14.153773 1339 log.go:172] (0xc000c0d600) Reply frame received for 1\nI0603 00:18:14.153812 1339 log.go:172] (0xc000c0d600) (0xc000710960) Create stream\nI0603 00:18:14.153822 1339 log.go:172] (0xc000c0d600) (0xc000710960) Stream added, broadcasting: 3\nI0603 00:18:14.154884 1339 log.go:172] (0xc000c0d600) Reply frame received for 3\nI0603 00:18:14.154923 1339 log.go:172] (0xc000c0d600) (0xc0006f9b80) Create stream\nI0603 00:18:14.154938 1339 log.go:172] (0xc000c0d600) (0xc0006f9b80) Stream added, broadcasting: 5\nI0603 00:18:14.155793 1339 log.go:172] (0xc000c0d600) Reply frame received for 5\nI0603 00:18:14.242910 1339 log.go:172] (0xc000c0d600) Data frame received for 5\nI0603 00:18:14.242937 1339 log.go:172] (0xc0006f9b80) (5) Data frame handling\nI0603 00:18:14.242956 1339 log.go:172] (0xc0006f9b80) (5) Data frame sent\n+ nslookup nodeport-service\nI0603 00:18:14.374023 1339 log.go:172] (0xc000c0d600) Data frame received for 3\nI0603 00:18:14.374049 1339 log.go:172] (0xc000710960) (3) Data frame handling\nI0603 00:18:14.374150 1339 log.go:172] (0xc000710960) (3) 
Data frame sent\nI0603 00:18:14.375257 1339 log.go:172] (0xc000c0d600) Data frame received for 3\nI0603 00:18:14.375284 1339 log.go:172] (0xc000710960) (3) Data frame handling\nI0603 00:18:14.375304 1339 log.go:172] (0xc000710960) (3) Data frame sent\nI0603 00:18:14.375994 1339 log.go:172] (0xc000c0d600) Data frame received for 3\nI0603 00:18:14.376021 1339 log.go:172] (0xc000710960) (3) Data frame handling\nI0603 00:18:14.376047 1339 log.go:172] (0xc000c0d600) Data frame received for 5\nI0603 00:18:14.376090 1339 log.go:172] (0xc0006f9b80) (5) Data frame handling\nI0603 00:18:14.377814 1339 log.go:172] (0xc000c0d600) Data frame received for 1\nI0603 00:18:14.377834 1339 log.go:172] (0xc0007dde00) (1) Data frame handling\nI0603 00:18:14.377845 1339 log.go:172] (0xc0007dde00) (1) Data frame sent\nI0603 00:18:14.377859 1339 log.go:172] (0xc000c0d600) (0xc0007dde00) Stream removed, broadcasting: 1\nI0603 00:18:14.377908 1339 log.go:172] (0xc000c0d600) Go away received\nI0603 00:18:14.378226 1339 log.go:172] (0xc000c0d600) (0xc0007dde00) Stream removed, broadcasting: 1\nI0603 00:18:14.378253 1339 log.go:172] (0xc000c0d600) (0xc000710960) Stream removed, broadcasting: 3\nI0603 00:18:14.378264 1339 log.go:172] (0xc000c0d600) (0xc0006f9b80) Stream removed, broadcasting: 5\n" Jun 3 00:18:14.384: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1998.svc.cluster.local\tcanonical name = externalsvc.services-1998.svc.cluster.local.\nName:\texternalsvc.services-1998.svc.cluster.local\nAddress: 10.100.230.186\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1998, will wait for the garbage collector to delete the pods Jun 3 00:18:14.463: INFO: Deleting ReplicationController externalsvc took: 7.789768ms Jun 3 00:18:14.764: INFO: Terminating ReplicationController externalsvc pods took: 300.255659ms Jun 3 00:18:19.707: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:19.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1998" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:16.172 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":144,"skipped":2254,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:19.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:19.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-176" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":145,"skipped":2260,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:19.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 3 00:18:20.031: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:20.082: INFO: Number of nodes with available pods: 0 Jun 3 00:18:20.082: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:18:21.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:21.091: INFO: Number of nodes with available pods: 0 Jun 3 00:18:21.091: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:18:22.807: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:22.815: INFO: Number of nodes with available pods: 0 Jun 3 00:18:22.815: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:18:23.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:23.304: INFO: Number of nodes with available pods: 0 Jun 3 00:18:23.304: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:18:24.222: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:24.226: INFO: Number of nodes with available pods: 0 Jun 3 00:18:24.226: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:18:25.107: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:25.111: INFO: Number of nodes with available pods: 1 Jun 3 00:18:25.111: INFO: Node latest-worker2 is running more than one daemon pod Jun 3 00:18:26.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:26.090: INFO: Number of nodes with available pods: 2 Jun 3 00:18:26.090: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 3 00:18:26.155: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:18:26.167: INFO: Number of nodes with available pods: 2 Jun 3 00:18:26.167: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
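The "simple DaemonSet" being exercised above is, in outline, nothing more than a selector plus a one-container template; a sketch assuming the webserver image the suite uses elsewhere in this run (the exact manifest lives in test/e2e/apps/daemon_set.go):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        containers:
        - name: app
          image: httpd:2.4.38-alpine   # assumption: same webserver image seen elsewhere in this log
  EOF
  # Every schedulable (untainted) node gets one pod; a pod that fails or is
  # deleted is recreated by the controller, which is what this test asserts:
  kubectl get pods -l daemonset-name=daemon-set -o wide --watch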
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-665, will wait for the garbage collector to delete the pods Jun 3 00:18:27.239: INFO: Deleting DaemonSet.extensions daemon-set took: 6.427476ms Jun 3 00:18:27.339: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.257212ms Jun 3 00:18:35.399: INFO: Number of nodes with available pods: 0 Jun 3 00:18:35.399: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 00:18:35.402: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-665/daemonsets","resourceVersion":"9805491"},"items":null} Jun 3 00:18:35.404: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-665/pods","resourceVersion":"9805491"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:35.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-665" for this suite. • [SLOW TEST:15.495 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":146,"skipped":2268,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:35.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-75dde789-864b-47fc-b436-e79cb07ce7e2 STEP: Creating a pod to test consume secrets Jun 3 00:18:35.588: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6" in namespace "projected-665" to be "Succeeded or Failed" Jun 3 00:18:35.606: INFO: Pod "pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.855361ms Jun 3 00:18:37.610: INFO: Pod "pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021845406s Jun 3 00:18:39.613: INFO: Pod "pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025479267s STEP: Saw pod success Jun 3 00:18:39.613: INFO: Pod "pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6" satisfied condition "Succeeded or Failed" Jun 3 00:18:39.616: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6 container projected-secret-volume-test: STEP: delete the pod Jun 3 00:18:39.809: INFO: Waiting for pod pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6 to disappear Jun 3 00:18:39.819: INFO: Pod pod-projected-secrets-abf8166b-4565-42bf-9549-02d8070a6ea6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-665" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2280,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:39.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Jun 3 00:18:39.884: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:39.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7751" for this suite. 
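Passing --port 0 (logged as -p 0 above) asks the proxy to bind an ephemeral port and print it, which is what the test then curls. By hand:

  kubectl proxy --port=0 &
  # stdout shows the chosen port, e.g. "Starting to serve on 127.0.0.1:42315";
  # the API is then reachable through the proxy without further auth:
  curl -s http://127.0.0.1:42315/api/   # substitute the printed port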
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":148,"skipped":2288,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:39.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 3 00:18:40.073: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Jun 3 00:18:40.706: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 3 00:18:43.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 00:18:45.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740320, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 00:18:47.750: INFO: Waited 624.261991ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:18:48.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6980" for this suite. • [SLOW TEST:8.373 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":149,"skipped":2291,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:18:48.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9868 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-9868 Jun 3 00:18:48.866: INFO: Found 0 stateful pods, waiting for 1 Jun 3 00:18:58.870: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 3 00:18:58.906: INFO: Deleting all statefulset in ns statefulset-9868 Jun 3 00:18:58.959: INFO: Scaling statefulset ss to 0 Jun 3 00:19:09.012: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 00:19:09.015: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:19:09.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9868" for this suite. 
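The scale-subresource steps above map directly onto kubectl scale, which talks to the StatefulSet's /scale endpoint rather than updating the object's spec in place:

  kubectl scale statefulset ss --replicas=2 -n statefulset-9868
  # Verifying the test's "Spec.Replicas was modified" assertion:
  kubectl get statefulset ss -n statefulset-9868 -o jsonpath='{.spec.replicas}'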
• [SLOW TEST:20.681 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":150,"skipped":2296,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:19:09.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0603 00:19:10.178010 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 3 00:19:10.178: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:19:10.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1702" for this suite. 
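The garbage-collection behaviour above (ReplicaSet and pods removed along with the deployment, after a short lag, hence the "expected 0 rs, got 1 rs" retry) is the default cascading delete. A hand-run equivalent with hypothetical names:

  kubectl create deployment gc-demo --image=httpd:2.4.38-alpine   # hypothetical name; image as seen elsewhere in this run
  kubectl get replicasets                  # the deployment creates one ReplicaSet
  kubectl delete deployment gc-demo        # default (background) cascading deletion
  kubectl get replicasets,pods             # empties out once the garbage collector catches up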
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":151,"skipped":2307,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:19:10.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3180 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jun 3 00:19:10.531: INFO: Found 0 stateful pods, waiting for 3 Jun 3 00:19:20.604: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:19:20.605: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:19:20.605: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 3 00:19:30.537: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:19:30.537: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:19:30.537: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:19:30.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3180 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 00:19:33.508: INFO: stderr: "I0603 00:19:33.372111 1382 log.go:172] (0xc00075e0b0) (0xc000732640) Create stream\nI0603 00:19:33.372150 1382 log.go:172] (0xc00075e0b0) (0xc000732640) Stream added, broadcasting: 1\nI0603 00:19:33.374710 1382 log.go:172] (0xc00075e0b0) Reply frame received for 1\nI0603 00:19:33.374750 1382 log.go:172] (0xc00075e0b0) (0xc000732f00) Create stream\nI0603 00:19:33.374761 1382 log.go:172] (0xc00075e0b0) (0xc000732f00) Stream added, broadcasting: 3\nI0603 00:19:33.375529 1382 log.go:172] (0xc00075e0b0) Reply frame received for 3\nI0603 00:19:33.375549 1382 log.go:172] (0xc00075e0b0) (0xc00069a320) Create stream\nI0603 00:19:33.375556 1382 log.go:172] (0xc00075e0b0) (0xc00069a320) Stream added, broadcasting: 5\nI0603 00:19:33.376336 1382 log.go:172] (0xc00075e0b0) Reply frame received for 5\nI0603 00:19:33.469777 1382 log.go:172] (0xc00075e0b0) Data frame received for 5\nI0603 00:19:33.469800 1382 log.go:172] (0xc00069a320) (5) Data frame handling\nI0603 00:19:33.469818 1382 log.go:172] (0xc00069a320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0603 00:19:33.497947 
1382 log.go:172] (0xc00075e0b0) Data frame received for 3\nI0603 00:19:33.497990 1382 log.go:172] (0xc00075e0b0) Data frame received for 5\nI0603 00:19:33.498018 1382 log.go:172] (0xc00069a320) (5) Data frame handling\nI0603 00:19:33.498072 1382 log.go:172] (0xc000732f00) (3) Data frame handling\nI0603 00:19:33.498128 1382 log.go:172] (0xc000732f00) (3) Data frame sent\nI0603 00:19:33.498151 1382 log.go:172] (0xc00075e0b0) Data frame received for 3\nI0603 00:19:33.498169 1382 log.go:172] (0xc000732f00) (3) Data frame handling\nI0603 00:19:33.500585 1382 log.go:172] (0xc00075e0b0) Data frame received for 1\nI0603 00:19:33.500616 1382 log.go:172] (0xc000732640) (1) Data frame handling\nI0603 00:19:33.500636 1382 log.go:172] (0xc000732640) (1) Data frame sent\nI0603 00:19:33.500655 1382 log.go:172] (0xc00075e0b0) (0xc000732640) Stream removed, broadcasting: 1\nI0603 00:19:33.500700 1382 log.go:172] (0xc00075e0b0) Go away received\nI0603 00:19:33.501345 1382 log.go:172] (0xc00075e0b0) (0xc000732640) Stream removed, broadcasting: 1\nI0603 00:19:33.501370 1382 log.go:172] (0xc00075e0b0) (0xc000732f00) Stream removed, broadcasting: 3\nI0603 00:19:33.501382 1382 log.go:172] (0xc00075e0b0) (0xc00069a320) Stream removed, broadcasting: 5\n" Jun 3 00:19:33.508: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 00:19:33.508: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 3 00:19:43.542: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 3 00:19:53.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3180 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 00:19:53.828: INFO: stderr: "I0603 00:19:53.733523 1413 log.go:172] (0xc000ab1340) (0xc000ad41e0) Create stream\nI0603 00:19:53.733587 1413 log.go:172] (0xc000ab1340) (0xc000ad41e0) Stream added, broadcasting: 1\nI0603 00:19:53.738316 1413 log.go:172] (0xc000ab1340) Reply frame received for 1\nI0603 00:19:53.738363 1413 log.go:172] (0xc000ab1340) (0xc000844fa0) Create stream\nI0603 00:19:53.738372 1413 log.go:172] (0xc000ab1340) (0xc000844fa0) Stream added, broadcasting: 3\nI0603 00:19:53.739210 1413 log.go:172] (0xc000ab1340) Reply frame received for 3\nI0603 00:19:53.739248 1413 log.go:172] (0xc000ab1340) (0xc00066ee60) Create stream\nI0603 00:19:53.739266 1413 log.go:172] (0xc000ab1340) (0xc00066ee60) Stream added, broadcasting: 5\nI0603 00:19:53.740137 1413 log.go:172] (0xc000ab1340) Reply frame received for 5\nI0603 00:19:53.822129 1413 log.go:172] (0xc000ab1340) Data frame received for 3\nI0603 00:19:53.822154 1413 log.go:172] (0xc000844fa0) (3) Data frame handling\nI0603 00:19:53.822174 1413 log.go:172] (0xc000ab1340) Data frame received for 5\nI0603 00:19:53.822208 1413 log.go:172] (0xc00066ee60) (5) Data frame handling\nI0603 00:19:53.822223 1413 log.go:172] (0xc00066ee60) (5) Data frame sent\nI0603 00:19:53.822234 1413 log.go:172] (0xc000ab1340) Data frame received for 5\nI0603 00:19:53.822243 1413 log.go:172] (0xc00066ee60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0603 00:19:53.822267 1413 log.go:172] (0xc000844fa0) (3) Data frame sent\nI0603 00:19:53.822287 1413 
log.go:172] (0xc000ab1340) Data frame received for 3\nI0603 00:19:53.822298 1413 log.go:172] (0xc000844fa0) (3) Data frame handling\nI0603 00:19:53.823100 1413 log.go:172] (0xc000ab1340) Data frame received for 1\nI0603 00:19:53.823113 1413 log.go:172] (0xc000ad41e0) (1) Data frame handling\nI0603 00:19:53.823125 1413 log.go:172] (0xc000ad41e0) (1) Data frame sent\nI0603 00:19:53.823135 1413 log.go:172] (0xc000ab1340) (0xc000ad41e0) Stream removed, broadcasting: 1\nI0603 00:19:53.823196 1413 log.go:172] (0xc000ab1340) Go away received\nI0603 00:19:53.823397 1413 log.go:172] (0xc000ab1340) (0xc000ad41e0) Stream removed, broadcasting: 1\nI0603 00:19:53.823407 1413 log.go:172] (0xc000ab1340) (0xc000844fa0) Stream removed, broadcasting: 3\nI0603 00:19:53.823412 1413 log.go:172] (0xc000ab1340) (0xc00066ee60) Stream removed, broadcasting: 5\n" Jun 3 00:19:53.828: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 00:19:53.828: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 00:20:23.849: INFO: Waiting for StatefulSet statefulset-3180/ss2 to complete update Jun 3 00:20:23.849: INFO: Waiting for Pod statefulset-3180/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jun 3 00:20:33.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3180 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 00:20:34.149: INFO: stderr: "I0603 00:20:34.002279 1433 log.go:172] (0xc000a7a840) (0xc000a44640) Create stream\nI0603 00:20:34.002348 1433 log.go:172] (0xc000a7a840) (0xc000a44640) Stream added, broadcasting: 1\nI0603 00:20:34.006245 1433 log.go:172] (0xc000a7a840) Reply frame received for 1\nI0603 00:20:34.006286 1433 log.go:172] (0xc000a7a840) (0xc0005861e0) Create stream\nI0603 00:20:34.006306 1433 log.go:172] (0xc000a7a840) (0xc0005861e0) Stream added, broadcasting: 3\nI0603 00:20:34.007179 1433 log.go:172] (0xc000a7a840) Reply frame received for 3\nI0603 00:20:34.007230 1433 log.go:172] (0xc000a7a840) (0xc0004fcd20) Create stream\nI0603 00:20:34.007247 1433 log.go:172] (0xc000a7a840) (0xc0004fcd20) Stream added, broadcasting: 5\nI0603 00:20:34.008083 1433 log.go:172] (0xc000a7a840) Reply frame received for 5\nI0603 00:20:34.097233 1433 log.go:172] (0xc000a7a840) Data frame received for 5\nI0603 00:20:34.097295 1433 log.go:172] (0xc0004fcd20) (5) Data frame handling\nI0603 00:20:34.097306 1433 log.go:172] (0xc0004fcd20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0603 00:20:34.140977 1433 log.go:172] (0xc000a7a840) Data frame received for 3\nI0603 00:20:34.141014 1433 log.go:172] (0xc0005861e0) (3) Data frame handling\nI0603 00:20:34.141046 1433 log.go:172] (0xc0005861e0) (3) Data frame sent\nI0603 00:20:34.141067 1433 log.go:172] (0xc000a7a840) Data frame received for 3\nI0603 00:20:34.141091 1433 log.go:172] (0xc000a7a840) Data frame received for 5\nI0603 00:20:34.141102 1433 log.go:172] (0xc0004fcd20) (5) Data frame handling\nI0603 00:20:34.141316 1433 log.go:172] (0xc0005861e0) (3) Data frame handling\nI0603 00:20:34.143533 1433 log.go:172] (0xc000a7a840) Data frame received for 1\nI0603 00:20:34.143567 1433 log.go:172] (0xc000a44640) (1) Data frame handling\nI0603 00:20:34.143588 1433 log.go:172] (0xc000a44640) (1) Data frame sent\nI0603 00:20:34.143608 1433 
log.go:172] (0xc000a7a840) (0xc000a44640) Stream removed, broadcasting: 1\nI0603 00:20:34.143631 1433 log.go:172] (0xc000a7a840) Go away received\nI0603 00:20:34.144119 1433 log.go:172] (0xc000a7a840) (0xc000a44640) Stream removed, broadcasting: 1\nI0603 00:20:34.144149 1433 log.go:172] (0xc000a7a840) (0xc0005861e0) Stream removed, broadcasting: 3\nI0603 00:20:34.144162 1433 log.go:172] (0xc000a7a840) (0xc0004fcd20) Stream removed, broadcasting: 5\n" Jun 3 00:20:34.149: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 00:20:34.149: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 00:20:44.180: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 3 00:20:54.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3180 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 00:20:54.506: INFO: stderr: "I0603 00:20:54.401588 1452 log.go:172] (0xc000b2afd0) (0xc0008777c0) Create stream\nI0603 00:20:54.401635 1452 log.go:172] (0xc000b2afd0) (0xc0008777c0) Stream added, broadcasting: 1\nI0603 00:20:54.403980 1452 log.go:172] (0xc000b2afd0) Reply frame received for 1\nI0603 00:20:54.404027 1452 log.go:172] (0xc000b2afd0) (0xc0003d65a0) Create stream\nI0603 00:20:54.404044 1452 log.go:172] (0xc000b2afd0) (0xc0003d65a0) Stream added, broadcasting: 3\nI0603 00:20:54.405287 1452 log.go:172] (0xc000b2afd0) Reply frame received for 3\nI0603 00:20:54.405320 1452 log.go:172] (0xc000b2afd0) (0xc0001980a0) Create stream\nI0603 00:20:54.405330 1452 log.go:172] (0xc000b2afd0) (0xc0001980a0) Stream added, broadcasting: 5\nI0603 00:20:54.406144 1452 log.go:172] (0xc000b2afd0) Reply frame received for 5\nI0603 00:20:54.495894 1452 log.go:172] (0xc000b2afd0) Data frame received for 3\nI0603 00:20:54.495919 1452 log.go:172] (0xc0003d65a0) (3) Data frame handling\nI0603 00:20:54.495937 1452 log.go:172] (0xc0003d65a0) (3) Data frame sent\nI0603 00:20:54.495945 1452 log.go:172] (0xc000b2afd0) Data frame received for 3\nI0603 00:20:54.495951 1452 log.go:172] (0xc0003d65a0) (3) Data frame handling\nI0603 00:20:54.496018 1452 log.go:172] (0xc000b2afd0) Data frame received for 5\nI0603 00:20:54.496035 1452 log.go:172] (0xc0001980a0) (5) Data frame handling\nI0603 00:20:54.496058 1452 log.go:172] (0xc0001980a0) (5) Data frame sent\nI0603 00:20:54.496069 1452 log.go:172] (0xc000b2afd0) Data frame received for 5\nI0603 00:20:54.496081 1452 log.go:172] (0xc0001980a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0603 00:20:54.498790 1452 log.go:172] (0xc000b2afd0) Data frame received for 1\nI0603 00:20:54.498812 1452 log.go:172] (0xc0008777c0) (1) Data frame handling\nI0603 00:20:54.498834 1452 log.go:172] (0xc0008777c0) (1) Data frame sent\nI0603 00:20:54.498852 1452 log.go:172] (0xc000b2afd0) (0xc0008777c0) Stream removed, broadcasting: 1\nI0603 00:20:54.498875 1452 log.go:172] (0xc000b2afd0) Go away received\nI0603 00:20:54.499728 1452 log.go:172] (0xc000b2afd0) (0xc0008777c0) Stream removed, broadcasting: 1\nI0603 00:20:54.499765 1452 log.go:172] (0xc000b2afd0) (0xc0003d65a0) Stream removed, broadcasting: 3\nI0603 00:20:54.499817 1452 log.go:172] (0xc000b2afd0) (0xc0001980a0) Stream removed, broadcasting: 5\n" Jun 3 00:20:54.506: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 
3 00:20:54.506: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 00:21:14.529: INFO: Waiting for StatefulSet statefulset-3180/ss2 to complete update Jun 3 00:21:14.529: INFO: Waiting for Pod statefulset-3180/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 3 00:21:24.538: INFO: Deleting all statefulset in ns statefulset-3180 Jun 3 00:21:24.541: INFO: Scaling statefulset ss2 to 0 Jun 3 00:22:04.560: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 00:22:04.563: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:22:04.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3180" for this suite. • [SLOW TEST:174.407 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":152,"skipped":2328,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:22:04.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-n256 STEP: Creating a pod to test atomic-volume-subpath Jun 3 00:22:04.688: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-n256" in namespace "subpath-7955" to be "Succeeded or Failed" Jun 3 00:22:04.705: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Pending", Reason="", readiness=false. Elapsed: 17.16885ms Jun 3 00:22:06.792: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104088586s Jun 3 00:22:08.797: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 4.108686322s Jun 3 00:22:10.802: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.114068599s Jun 3 00:22:12.807: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 8.118471517s Jun 3 00:22:14.811: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 10.123110822s Jun 3 00:22:16.816: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 12.127282256s Jun 3 00:22:18.820: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 14.131340304s Jun 3 00:22:20.824: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 16.135744305s Jun 3 00:22:22.832: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 18.143963318s Jun 3 00:22:24.837: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 20.148795904s Jun 3 00:22:26.841: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Running", Reason="", readiness=true. Elapsed: 22.152755649s Jun 3 00:22:28.846: INFO: Pod "pod-subpath-test-downwardapi-n256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.157245702s STEP: Saw pod success Jun 3 00:22:28.846: INFO: Pod "pod-subpath-test-downwardapi-n256" satisfied condition "Succeeded or Failed" Jun 3 00:22:28.848: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-n256 container test-container-subpath-downwardapi-n256: STEP: delete the pod Jun 3 00:22:28.895: INFO: Waiting for pod pod-subpath-test-downwardapi-n256 to disappear Jun 3 00:22:28.908: INFO: Pod pod-subpath-test-downwardapi-n256 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-n256 Jun 3 00:22:28.908: INFO: Deleting pod "pod-subpath-test-downwardapi-n256" in namespace "subpath-7955" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:22:28.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7955" for this suite. 
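The atomic-writer subpath spec above checks that a downward API volume mounted via subPath keeps serving consistent content while the kubelet atomically rewrites it; its final step pulls the test container's log through the framework. Done by hand, that fetch is a plain kubectl call (a sketch using this run's names and kubeconfig; it only works while the pod exists, and the pod is deleted at the end of the spec):

# Fetch the subpath test container's output (names taken verbatim from this run)
kubectl --kubeconfig=/root/.kube/config -n subpath-7955 \
  logs pod-subpath-test-downwardapi-n256 -c test-container-subpath-downwardapi-n256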
• [SLOW TEST:24.324 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":153,"skipped":2337,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:22:28.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-d2256a04-f39b-42fd-9fe6-f0849a1925c9 STEP: Creating a pod to test consume secrets Jun 3 00:22:29.036: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968" in namespace "projected-5264" to be "Succeeded or Failed" Jun 3 00:22:29.046: INFO: Pod "pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968": Phase="Pending", Reason="", readiness=false. Elapsed: 10.242971ms Jun 3 00:22:31.050: INFO: Pod "pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014057997s Jun 3 00:22:33.054: INFO: Pod "pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018629539s STEP: Saw pod success Jun 3 00:22:33.054: INFO: Pod "pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968" satisfied condition "Succeeded or Failed" Jun 3 00:22:33.058: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968 container projected-secret-volume-test: STEP: delete the pod Jun 3 00:22:33.140: INFO: Waiting for pod pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968 to disappear Jun 3 00:22:33.147: INFO: Pod pod-projected-secrets-b3c7e1ca-7487-4fb0-9f08-00759fee5968 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:22:33.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5264" for this suite. 
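The projected-secret spec that follows maps secret keys to custom paths and file modes inside a projected volume. While the spec runs, the secret is an ordinary namespaced object, so it can be inspected directly (a sketch; the secret name is from this run, and the object disappears with its namespace):

# Inspect the secret that the spec projects into the pod
kubectl --kubeconfig=/root/.kube/config -n projected-5264 \
  get secret projected-secret-test-map-d2256a04-f39b-42fd-9fe6-f0849a1925c9 -o yaml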
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:22:33.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 3 00:22:33.206: INFO: >>> kubeConfig: /root/.kube/config Jun 3 00:22:35.194: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:22:45.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6389" for this suite. • [SLOW TEST:12.662 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":155,"skipped":2380,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:22:45.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:22:45.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0" in namespace "downward-api-9534" to be "Succeeded or Failed" Jun 3 00:22:45.914: INFO: Pod "downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.970739ms Jun 3 00:22:47.918: INFO: Pod "downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008233593s Jun 3 00:22:49.952: INFO: Pod "downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042897022s STEP: Saw pod success Jun 3 00:22:49.952: INFO: Pod "downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0" satisfied condition "Succeeded or Failed" Jun 3 00:22:49.955: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0 container client-container: STEP: delete the pod Jun 3 00:22:49.980: INFO: Waiting for pod downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0 to disappear Jun 3 00:22:49.992: INFO: Pod downwardapi-volume-c36d669c-12a7-43dc-aed5-d4a2c7837fa0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:22:49.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9534" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2392,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:22:49.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:22:50.485: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:22:52.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740570, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740570, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740570, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740570, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint 
Jun 3 00:22:55.530: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:22:55.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:22:56.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1224" for this suite. STEP: Destroying namespace "webhook-1224-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.901 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":157,"skipped":2398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:22:56.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:22:56.982: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3c5a8bc1-82c4-4301-a988-6027264672bf" in namespace "security-context-test-3664" to be "Succeeded or Failed" Jun 3 00:22:56.989: INFO: Pod "busybox-user-65534-3c5a8bc1-82c4-4301-a988-6027264672bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.385059ms Jun 3 00:22:58.993: INFO: Pod "busybox-user-65534-3c5a8bc1-82c4-4301-a988-6027264672bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010006124s Jun 3 00:23:01.000: INFO: Pod "busybox-user-65534-3c5a8bc1-82c4-4301-a988-6027264672bf": Phase="Running", Reason="", readiness=true. Elapsed: 4.017934488s Jun 3 00:23:03.005: INFO: Pod "busybox-user-65534-3c5a8bc1-82c4-4301-a988-6027264672bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022584559s Jun 3 00:23:03.005: INFO: Pod "busybox-user-65534-3c5a8bc1-82c4-4301-a988-6027264672bf" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:23:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3664" for this suite. • [SLOW TEST:6.114 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2432,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:23:03.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:23:03.808: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:23:05.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740583, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740583, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740583, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740583, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:23:08.903: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:23:19.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2760" for this suite. STEP: Destroying namespace "webhook-2760-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.161 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":159,"skipped":2437,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:23:19.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-968 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-968 STEP: Deleting pre-stop pod Jun 3 00:23:34.353: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:23:34.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-968" for this suite. • [SLOW TEST:15.226 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":160,"skipped":2453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:23:34.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jun 3 00:23:38.852: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6641 PodName:var-expansion-7ff3acfa-03f4-4a42-8aee-6f4b9394ad90 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:23:38.852: INFO: >>> kubeConfig: /root/.kube/config I0603 00:23:38.888737 7 log.go:172] (0xc00299e000) (0xc00201c780) Create stream I0603 00:23:38.888798 7 log.go:172] (0xc00299e000) (0xc00201c780) Stream added, broadcasting: 1 I0603 00:23:38.891504 7 log.go:172] (0xc00299e000) Reply frame received for 1 I0603 00:23:38.891534 7 log.go:172] (0xc00299e000) (0xc0025c7680) Create stream I0603 00:23:38.891544 7 log.go:172] (0xc00299e000) (0xc0025c7680) Stream added, broadcasting: 3 I0603 00:23:38.892479 7 log.go:172] (0xc00299e000) Reply frame received for 3 I0603 00:23:38.892515 7 log.go:172] (0xc00299e000) (0xc0026a06e0) Create stream I0603 00:23:38.892534 7 log.go:172] (0xc00299e000) (0xc0026a06e0) Stream added, broadcasting: 5 I0603 00:23:38.893806 7 log.go:172] (0xc00299e000) Reply frame received for 5 I0603 00:23:38.979636 7 log.go:172] (0xc00299e000) Data frame received for 3 I0603 00:23:38.979691 7 log.go:172] (0xc0025c7680) (3) Data frame handling I0603 00:23:38.979720 7 log.go:172] (0xc00299e000) Data frame received for 5 I0603 00:23:38.979734 7 log.go:172] (0xc0026a06e0) (5) Data frame handling I0603 00:23:38.981495 7 log.go:172] (0xc00299e000) Data frame received for 1 I0603 00:23:38.981506 7 log.go:172] (0xc00201c780) (1) Data frame handling I0603 00:23:38.981517 7 log.go:172] (0xc00201c780) (1) Data frame 
sent I0603 00:23:38.981699 7 log.go:172] (0xc00299e000) (0xc00201c780) Stream removed, broadcasting: 1 I0603 00:23:38.981728 7 log.go:172] (0xc00299e000) Go away received I0603 00:23:38.981802 7 log.go:172] (0xc00299e000) (0xc00201c780) Stream removed, broadcasting: 1 I0603 00:23:38.981818 7 log.go:172] (0xc00299e000) (0xc0025c7680) Stream removed, broadcasting: 3 I0603 00:23:38.981826 7 log.go:172] (0xc00299e000) (0xc0026a06e0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jun 3 00:23:38.986: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6641 PodName:var-expansion-7ff3acfa-03f4-4a42-8aee-6f4b9394ad90 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:23:38.986: INFO: >>> kubeConfig: /root/.kube/config I0603 00:23:39.015484 7 log.go:172] (0xc000fa6fd0) (0xc0020940a0) Create stream I0603 00:23:39.015520 7 log.go:172] (0xc000fa6fd0) (0xc0020940a0) Stream added, broadcasting: 1 I0603 00:23:39.018388 7 log.go:172] (0xc000fa6fd0) Reply frame received for 1 I0603 00:23:39.018443 7 log.go:172] (0xc000fa6fd0) (0xc0020941e0) Create stream I0603 00:23:39.018459 7 log.go:172] (0xc000fa6fd0) (0xc0020941e0) Stream added, broadcasting: 3 I0603 00:23:39.019201 7 log.go:172] (0xc000fa6fd0) Reply frame received for 3 I0603 00:23:39.019238 7 log.go:172] (0xc000fa6fd0) (0xc0026a0780) Create stream I0603 00:23:39.019250 7 log.go:172] (0xc000fa6fd0) (0xc0026a0780) Stream added, broadcasting: 5 I0603 00:23:39.020048 7 log.go:172] (0xc000fa6fd0) Reply frame received for 5 I0603 00:23:39.090783 7 log.go:172] (0xc000fa6fd0) Data frame received for 5 I0603 00:23:39.090814 7 log.go:172] (0xc0026a0780) (5) Data frame handling I0603 00:23:39.090832 7 log.go:172] (0xc000fa6fd0) Data frame received for 3 I0603 00:23:39.090841 7 log.go:172] (0xc0020941e0) (3) Data frame handling I0603 00:23:39.092270 7 log.go:172] (0xc000fa6fd0) Data frame received for 1 I0603 00:23:39.092294 7 log.go:172] (0xc0020940a0) (1) Data frame handling I0603 00:23:39.092316 7 log.go:172] (0xc0020940a0) (1) Data frame sent I0603 00:23:39.092328 7 log.go:172] (0xc000fa6fd0) (0xc0020940a0) Stream removed, broadcasting: 1 I0603 00:23:39.092398 7 log.go:172] (0xc000fa6fd0) (0xc0020940a0) Stream removed, broadcasting: 1 I0603 00:23:39.092412 7 log.go:172] (0xc000fa6fd0) (0xc0020941e0) Stream removed, broadcasting: 3 I0603 00:23:39.092470 7 log.go:172] (0xc000fa6fd0) Go away received I0603 00:23:39.092590 7 log.go:172] (0xc000fa6fd0) (0xc0026a0780) Stream removed, broadcasting: 5 STEP: updating the annotation value Jun 3 00:23:39.628: INFO: Successfully updated pod "var-expansion-7ff3acfa-03f4-4a42-8aee-6f4b9394ad90" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jun 3 00:23:39.679: INFO: Deleting pod "var-expansion-7ff3acfa-03f4-4a42-8aee-6f4b9394ad90" in namespace "var-expansion-6641" Jun 3 00:23:39.687: INFO: Wait up to 5m0s for pod "var-expansion-7ff3acfa-03f4-4a42-8aee-6f4b9394ad90" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:15.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6641" for this suite. 
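The two ExecWithOptions calls above are the framework's equivalent of kubectl exec: the first touches a file through the expanded subpath mount, the second asserts the same file is visible through /subpath_mount. A hand-run equivalent of that check, assuming the pod were still running (it is deleted gracefully right after):

# Re-run the spec's file-visibility check inside the container
kubectl --kubeconfig=/root/.kube/config -n var-expansion-6641 \
  exec var-expansion-7ff3acfa-03f4-4a42-8aee-6f4b9394ad90 -c dapi-container -- \
  /bin/sh -c 'test -f /subpath_mount/test.log && echo present'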
• [SLOW TEST:41.425 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":161,"skipped":2507,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:15.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:24:16.595: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:24:18.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740656, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740656, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740656, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740656, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:24:21.656: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:22.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8719" for this suite. 
STEP: Destroying namespace "webhook-8719-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.516 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":162,"skipped":2511,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:22.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 3 00:24:30.531: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 00:24:30.553: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 00:24:32.553: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 00:24:32.583: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 00:24:34.553: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 00:24:34.557: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:34.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5883" for this suite. 
• [SLOW TEST:12.221 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2517,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:34.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:39.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5917" for this suite. 
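The watch spec below opens several concurrent watches, each starting from a different resourceVersion of the produced events, and asserts that all of them deliver the stream in the same order. A single such stream can be observed by hand with a plain kubectl watch (a sketch; the concrete resource kind the spec mutates is internal to the test and not shown in this log, so configmaps here are only illustrative):

# Observe a watch stream in the spec's namespace (illustrative resource kind)
kubectl --kubeconfig=/root/.kube/config -n watch-5917 get configmaps --watch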
• [SLOW TEST:5.156 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":164,"skipped":2525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:39.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-46f975d8-68a7-4abf-9a88-0ee7744d451f STEP: Creating a pod to test consume configMaps Jun 3 00:24:39.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a" in namespace "configmap-6632" to be "Succeeded or Failed" Jun 3 00:24:39.905: INFO: Pod "pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 80.433035ms Jun 3 00:24:41.909: INFO: Pod "pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084294003s Jun 3 00:24:43.914: INFO: Pod "pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089000292s STEP: Saw pod success Jun 3 00:24:43.914: INFO: Pod "pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a" satisfied condition "Succeeded or Failed" Jun 3 00:24:43.917: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a container configmap-volume-test: STEP: delete the pod Jun 3 00:24:43.972: INFO: Waiting for pod pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a to disappear Jun 3 00:24:43.987: INFO: Pod pod-configmaps-a2db93b9-17e8-4e32-8491-2714a5297c3a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:43.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6632" for this suite. 
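As with the projected secret earlier, the configMap under test in the spec below is a normal object in its namespace while it exists, so its mapped keys can be examined directly (a sketch with this run's names; the object is removed once the namespace is destroyed):

# Inspect the mapped configMap the pod consumes
kubectl --kubeconfig=/root/.kube/config -n configmap-6632 \
  describe configmap configmap-test-volume-map-46f975d8-68a7-4abf-9a88-0ee7744d451f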
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2559,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:43.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Jun 3 00:24:44.129: INFO: Waiting up to 5m0s for pod "var-expansion-1695b40d-b146-4141-8d96-30336860390a" in namespace "var-expansion-4270" to be "Succeeded or Failed" Jun 3 00:24:44.137: INFO: Pod "var-expansion-1695b40d-b146-4141-8d96-30336860390a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150576ms Jun 3 00:24:46.157: INFO: Pod "var-expansion-1695b40d-b146-4141-8d96-30336860390a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028157043s Jun 3 00:24:48.162: INFO: Pod "var-expansion-1695b40d-b146-4141-8d96-30336860390a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033207509s STEP: Saw pod success Jun 3 00:24:48.162: INFO: Pod "var-expansion-1695b40d-b146-4141-8d96-30336860390a" satisfied condition "Succeeded or Failed" Jun 3 00:24:48.166: INFO: Trying to get logs from node latest-worker pod var-expansion-1695b40d-b146-4141-8d96-30336860390a container dapi-container: STEP: delete the pod Jun 3 00:24:48.201: INFO: Waiting for pod var-expansion-1695b40d-b146-4141-8d96-30336860390a to disappear Jun 3 00:24:48.213: INFO: Pod var-expansion-1695b40d-b146-4141-8d96-30336860390a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:48.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4270" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":166,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:48.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-88cb445b-ee73-4b21-8187-4f0e4b9828ea STEP: Creating a pod to test consume secrets Jun 3 00:24:48.325: INFO: Waiting up to 5m0s for pod "pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30" in namespace "secrets-2638" to be "Succeeded or Failed" Jun 3 00:24:48.352: INFO: Pod "pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30": Phase="Pending", Reason="", readiness=false. Elapsed: 27.60646ms Jun 3 00:24:50.372: INFO: Pod "pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047584177s Jun 3 00:24:52.376: INFO: Pod "pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051326806s STEP: Saw pod success Jun 3 00:24:52.376: INFO: Pod "pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30" satisfied condition "Succeeded or Failed" Jun 3 00:24:52.379: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30 container secret-volume-test: STEP: delete the pod Jun 3 00:24:52.416: INFO: Waiting for pod pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30 to disappear Jun 3 00:24:52.486: INFO: Pod pod-secrets-a6fca9b2-e26a-4424-a415-f6d4385a3c30 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:52.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2638" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2600,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:52.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:24:52.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-729' Jun 3 00:24:53.017: INFO: stderr: "" Jun 3 00:24:53.018: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jun 3 00:24:53.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-729' Jun 3 00:24:53.306: INFO: stderr: "" Jun 3 00:24:53.306: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 3 00:24:54.311: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:24:54.311: INFO: Found 0 / 1 Jun 3 00:24:55.311: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:24:55.312: INFO: Found 0 / 1 Jun 3 00:24:56.310: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:24:56.310: INFO: Found 0 / 1 Jun 3 00:24:57.311: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:24:57.311: INFO: Found 1 / 1 Jun 3 00:24:57.311: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 00:24:57.314: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:24:57.314: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 3 00:24:57.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-hhp6k --namespace=kubectl-729' Jun 3 00:24:57.444: INFO: stderr: "" Jun 3 00:24:57.444: INFO: stdout: "Name: agnhost-master-hhp6k\nNamespace: kubectl-729\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Wed, 03 Jun 2020 00:24:53 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.186\nIPs:\n IP: 10.244.1.186\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://1ac2d32bcd3638178b8919e7b5c770ee165dbc93a6f20a3261ef11ac39e85db4\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 03 Jun 2020 00:24:55 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-rfb6g (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-rfb6g:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-rfb6g\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-729/agnhost-master-hhp6k to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container agnhost-master\n" Jun 3 00:24:57.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-729' Jun 3 00:24:57.595: INFO: stderr: "" Jun 3 00:24:57.595: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-729\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-hhp6k\n" Jun 3 00:24:57.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-729' Jun 3 00:24:57.738: INFO: stderr: "" Jun 3 00:24:57.738: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-729\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.109.154.38\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.186:6379\nSession Affinity: None\nEvents: <none>\n" Jun 3 00:24:57.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node
latest-control-plane' Jun 3 00:24:57.880: INFO: stderr: "" Jun 3 00:24:57.880: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Wed, 03 Jun 2020 00:24:56 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 03 Jun 2020 00:21:34 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 03 Jun 2020 00:21:34 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 03 Jun 2020 00:21:34 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 03 Jun 2020 00:21:34 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 34d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 34d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 34d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n" Jun 3 00:24:57.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773
--kubeconfig=/root/.kube/config describe namespace kubectl-729' Jun 3 00:24:57.983: INFO: stderr: "" Jun 3 00:24:57.983: INFO: stdout: "Name: kubectl-729\nLabels: e2e-framework=kubectl\n e2e-run=6f7bef9f-c3a0-4567-970d-1ae4b3b83615\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:24:57.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-729" for this suite. • [SLOW TEST:5.495 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":168,"skipped":2605,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:24:57.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-3fdd9080-ae9d-4538-b585-7d67da924538 STEP: Creating a pod to test consume configMaps Jun 3 00:24:58.140: INFO: Waiting up to 5m0s for pod "pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd" in namespace "configmap-4196" to be "Succeeded or Failed" Jun 3 00:24:58.152: INFO: Pod "pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.747378ms Jun 3 00:25:00.157: INFO: Pod "pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016666224s Jun 3 00:25:02.161: INFO: Pod "pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd": Phase="Running", Reason="", readiness=true. Elapsed: 4.02127508s Jun 3 00:25:04.166: INFO: Pod "pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.026086374s STEP: Saw pod success Jun 3 00:25:04.166: INFO: Pod "pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd" satisfied condition "Succeeded or Failed" Jun 3 00:25:04.169: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd container configmap-volume-test: STEP: delete the pod Jun 3 00:25:04.227: INFO: Waiting for pod pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd to disappear Jun 3 00:25:04.232: INFO: Pod pod-configmaps-315a06a8-4043-4c8d-9fb9-e059002c06fd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:04.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4196" for this suite. • [SLOW TEST:6.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2627,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:04.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:11.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5583" for this suite. • [SLOW TEST:7.105 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":170,"skipped":2642,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:11.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:25:11.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865" in namespace "projected-2065" to be "Succeeded or Failed" Jun 3 00:25:11.443: INFO: Pod "downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865": Phase="Pending", Reason="", readiness=false. Elapsed: 10.243066ms Jun 3 00:25:13.595: INFO: Pod "downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161931675s Jun 3 00:25:15.599: INFO: Pod "downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166259034s STEP: Saw pod success Jun 3 00:25:15.599: INFO: Pod "downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865" satisfied condition "Succeeded or Failed" Jun 3 00:25:15.602: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865 container client-container: STEP: delete the pod Jun 3 00:25:15.676: INFO: Waiting for pod downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865 to disappear Jun 3 00:25:15.683: INFO: Pod downwardapi-volume-5733908d-4f1f-430a-b8e4-ac272c28e865 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:15.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2065" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:15.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:25:15.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e" in namespace "projected-1349" to be "Succeeded or Failed" Jun 3 00:25:15.757: INFO: Pod "downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.664963ms Jun 3 00:25:17.761: INFO: Pod "downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016601882s Jun 3 00:25:19.765: INFO: Pod "downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02109393s STEP: Saw pod success Jun 3 00:25:19.765: INFO: Pod "downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e" satisfied condition "Succeeded or Failed" Jun 3 00:25:19.769: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e container client-container: STEP: delete the pod Jun 3 00:25:19.797: INFO: Waiting for pod downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e to disappear Jun 3 00:25:19.809: INFO: Pod downwardapi-volume-d4433073-c3a3-446b-8e20-5304bb87b21e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:19.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1349" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2699,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:19.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 00:25:24.036: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:24.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8785" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2715,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:24.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d6164298-4c34-450f-8a3a-49a2363d2e37 STEP: Creating a pod to test consume secrets Jun 3 00:25:24.332: INFO: Waiting up to 5m0s for pod "pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6" in namespace "secrets-1839" to be "Succeeded or Failed" Jun 3 00:25:24.360: INFO: Pod "pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.453988ms Jun 3 00:25:26.366: INFO: Pod "pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033648649s Jun 3 00:25:28.370: INFO: Pod "pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038041687s STEP: Saw pod success Jun 3 00:25:28.370: INFO: Pod "pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6" satisfied condition "Succeeded or Failed" Jun 3 00:25:28.373: INFO: Trying to get logs from node latest-worker pod pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6 container secret-volume-test: STEP: delete the pod Jun 3 00:25:28.398: INFO: Waiting for pod pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6 to disappear Jun 3 00:25:28.413: INFO: Pod pod-secrets-ca6bca46-48d8-4454-a05c-d83334e83cf6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:28.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1839" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2724,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:28.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:32.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7061" for this suite. 
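[Editor's note] A minimal sketch of the read-only-root check above (illustrative names): with readOnlyRootFilesystem set, any write to the container's root filesystem fails.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo test > /file"]   # this write should be rejected
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs busybox-readonly-demo   # expect something like: sh: can't create /file: Read-only file system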
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":2725,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:32.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:25:33.362: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:25:35.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740733, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740733, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740733, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726740733, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:25:38.445: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:38.790: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-5380" for this suite. STEP: Destroying namespace "webhook-5380-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.373 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":176,"skipped":2733,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:39.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:25:39.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4648" for this suite. 
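[Editor's note] The Services test above only lists services across all namespaces and looks for the one it created; the manual equivalent is a one-liner (the name filter below is illustrative):

kubectl get services --all-namespaces
kubectl get services --all-namespaces --field-selector metadata.name=kubernetes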
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":177,"skipped":2743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:25:39.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:27:39.274: INFO: Deleting pod "var-expansion-99774040-a08f-4078-845a-e3a5918344c1" in namespace "var-expansion-8602" Jun 3 00:27:39.279: INFO: Wait up to 5m0s for pod "var-expansion-99774040-a08f-4078-845a-e3a5918344c1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:27:43.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8602" for this suite. • [SLOW TEST:124.211 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":178,"skipped":2800,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:27:43.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 00:27:43.401: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 00:27:43.413: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 00:27:43.416: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 3 00:27:43.420: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.420: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 3 00:27:43.420: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.420: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 3 00:27:43.420: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.420: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 00:27:43.420: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.420: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 00:27:43.420: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 3 00:27:43.425: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.425: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 3 00:27:43.425: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.425: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 3 00:27:43.425: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.425: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 00:27:43.425: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 3 00:27:43.425: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1614e1187101e5b0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1614e11871b29a02], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:27:44.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6018" for this suite. 
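[Editor's note] A minimal sketch of the unschedulable pod above (illustrative names): a nodeSelector that matches no node label leaves the pod Pending, with FailedScheduling events like the ones the test watches for.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    nonexistent-label: nonexistent-value   # matches no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF

kubectl describe pod restricted-pod-demo   # Events: FailedScheduling ... node(s) didn't match node selector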
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":179,"skipped":2934,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:27:44.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:27:44.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac" in namespace "downward-api-9482" to be "Succeeded or Failed" Jun 3 00:27:44.537: INFO: Pod "downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.595194ms Jun 3 00:27:46.541: INFO: Pod "downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009753639s Jun 3 00:27:48.550: INFO: Pod "downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017890639s STEP: Saw pod success Jun 3 00:27:48.550: INFO: Pod "downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac" satisfied condition "Succeeded or Failed" Jun 3 00:27:48.552: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac container client-container: STEP: delete the pod Jun 3 00:27:48.608: INFO: Waiting for pod downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac to disappear Jun 3 00:27:48.620: INFO: Pod downwardapi-volume-5a1e5e51-d7fd-4c8c-8dec-1f05e68033ac no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:27:48.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9482" for this suite. 
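[Editor's note] The fallback this test verifies: when the container sets no memory limit, a downwardAPI file for limits.memory reports the node's allocatable memory (in bytes, with the default divisor) rather than failing. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]   # no resources.limits set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF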
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":2949,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:27:48.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 3 00:27:48.689: INFO: Waiting up to 5m0s for pod "downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e" in namespace "downward-api-567" to be "Succeeded or Failed" Jun 3 00:27:48.707: INFO: Pod "downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.669849ms Jun 3 00:27:50.711: INFO: Pod "downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022001447s Jun 3 00:27:52.716: INFO: Pod "downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02647682s STEP: Saw pod success Jun 3 00:27:52.716: INFO: Pod "downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e" satisfied condition "Succeeded or Failed" Jun 3 00:27:52.719: INFO: Trying to get logs from node latest-worker pod downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e container dapi-container: STEP: delete the pod Jun 3 00:27:52.758: INFO: Waiting for pod downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e to disappear Jun 3 00:27:52.765: INFO: Pod downward-api-812a8200-b2b8-41d3-8e56-308f62bee30e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:27:52.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-567" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":2960,"failed":0} S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:27:52.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-5134 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5134 to expose endpoints map[] Jun 3 00:27:52.886: INFO: Get endpoints failed (3.374666ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 3 00:27:53.888: INFO: successfully validated that service multi-endpoint-test in namespace services-5134 exposes endpoints map[] (1.006111765s elapsed) STEP: Creating pod pod1 in namespace services-5134 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5134 to expose endpoints map[pod1:[100]] Jun 3 00:27:58.019: INFO: successfully validated that service multi-endpoint-test in namespace services-5134 exposes endpoints map[pod1:[100]] (4.124825224s elapsed) STEP: Creating pod pod2 in namespace services-5134 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5134 to expose endpoints map[pod1:[100] pod2:[101]] Jun 3 00:28:02.249: INFO: successfully validated that service multi-endpoint-test in namespace services-5134 exposes endpoints map[pod1:[100] pod2:[101]] (4.22499059s elapsed) STEP: Deleting pod pod1 in namespace services-5134 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5134 to expose endpoints map[pod2:[101]] Jun 3 00:28:03.291: INFO: successfully validated that service multi-endpoint-test in namespace services-5134 exposes endpoints map[pod2:[101]] (1.01323482s elapsed) STEP: Deleting pod pod2 in namespace services-5134 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5134 to expose endpoints map[] Jun 3 00:28:04.458: INFO: successfully validated that service multi-endpoint-test in namespace services-5134 exposes endpoints map[] (1.162306034s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:28:04.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5134" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.780 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":182,"skipped":2961,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:28:04.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 3 00:28:04.674: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:28:10.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7051" for this suite. 
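[Editor's note] A minimal sketch of the failure mode above (illustrative names): with restartPolicy Never, a failing init container is not retried, the pod is marked failed, and the app container never starts.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]   # fails once, never retried
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo should never run"]
EOF

kubectl get pod init-fail-demo   # STATUS settles at Init:Error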
• [SLOW TEST:6.067 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":183,"skipped":2974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:28:10.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:28:10.714: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 3 00:28:12.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2307 create -f -' Jun 3 00:28:15.841: INFO: stderr: "" Jun 3 00:28:15.841: INFO: stdout: "e2e-test-crd-publish-openapi-9170-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 3 00:28:15.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2307 delete e2e-test-crd-publish-openapi-9170-crds test-cr' Jun 3 00:28:15.980: INFO: stderr: "" Jun 3 00:28:15.980: INFO: stdout: "e2e-test-crd-publish-openapi-9170-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 3 00:28:15.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2307 apply -f -' Jun 3 00:28:16.293: INFO: stderr: "" Jun 3 00:28:16.293: INFO: stdout: "e2e-test-crd-publish-openapi-9170-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 3 00:28:16.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2307 delete e2e-test-crd-publish-openapi-9170-crds test-cr' Jun 3 00:28:16.407: INFO: stderr: "" Jun 3 00:28:16.407: INFO: stdout: "e2e-test-crd-publish-openapi-9170-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 3 00:28:16.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9170-crds' Jun 3 00:28:17.159: INFO: stderr: "" Jun 3 00:28:17.159: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9170-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:28:20.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2307" for this suite. • [SLOW TEST:9.441 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":184,"skipped":3010,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:28:20.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:28:20.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777" in namespace "downward-api-6171" to be "Succeeded or Failed" Jun 3 00:28:20.172: INFO: Pod "downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777": Phase="Pending", Reason="", readiness=false. Elapsed: 14.332064ms Jun 3 00:28:22.176: INFO: Pod "downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018320268s Jun 3 00:28:24.180: INFO: Pod "downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.022544267s STEP: Saw pod success Jun 3 00:28:24.180: INFO: Pod "downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777" satisfied condition "Succeeded or Failed" Jun 3 00:28:24.183: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777 container client-container: STEP: delete the pod Jun 3 00:28:24.215: INFO: Waiting for pod downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777 to disappear Jun 3 00:28:24.227: INFO: Pod downwardapi-volume-288fa28a-48a6-4fce-9141-202c57514777 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:28:24.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6171" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":3072,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:28:24.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Jun 3 00:28:24.492: INFO: Waiting up to 5m0s for pod "pod-4abb0f9d-4d07-4085-a05b-ae939fb51981" in namespace "emptydir-9079" to be "Succeeded or Failed" Jun 3 00:28:24.563: INFO: Pod "pod-4abb0f9d-4d07-4085-a05b-ae939fb51981": Phase="Pending", Reason="", readiness=false. Elapsed: 70.899615ms Jun 3 00:28:26.568: INFO: Pod "pod-4abb0f9d-4d07-4085-a05b-ae939fb51981": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075397586s Jun 3 00:28:28.572: INFO: Pod "pod-4abb0f9d-4d07-4085-a05b-ae939fb51981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07975935s STEP: Saw pod success Jun 3 00:28:28.572: INFO: Pod "pod-4abb0f9d-4d07-4085-a05b-ae939fb51981" satisfied condition "Succeeded or Failed" Jun 3 00:28:28.576: INFO: Trying to get logs from node latest-worker2 pod pod-4abb0f9d-4d07-4085-a05b-ae939fb51981 container test-container: STEP: delete the pod Jun 3 00:28:28.614: INFO: Waiting for pod pod-4abb0f9d-4d07-4085-a05b-ae939fb51981 to disappear Jun 3 00:28:28.619: INFO: Pod pod-4abb0f9d-4d07-4085-a05b-ae939fb51981 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:28:28.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9079" for this suite. 
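[Editor's note] A minimal sketch of the emptyDir mode check above (illustrative names): a default-medium emptyDir is mounted and the container inspects the directory mode, which should be 0777.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]   # expect drwxrwxrwx
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: backed by node disk
EOF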
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":3081,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:28:28.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jun 3 00:28:28.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2991' Jun 3 00:28:29.073: INFO: stderr: "" Jun 3 00:28:29.073: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 00:28:29.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:29.232: INFO: stderr: "" Jun 3 00:28:29.232: INFO: stdout: "update-demo-nautilus-jn5pq update-demo-nautilus-kjcdf " Jun 3 00:28:29.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jn5pq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:29.336: INFO: stderr: "" Jun 3 00:28:29.336: INFO: stdout: "" Jun 3 00:28:29.336: INFO: update-demo-nautilus-jn5pq is created but not running Jun 3 00:28:34.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:34.446: INFO: stderr: "" Jun 3 00:28:34.446: INFO: stdout: "update-demo-nautilus-jn5pq update-demo-nautilus-kjcdf " Jun 3 00:28:34.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jn5pq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:34.547: INFO: stderr: "" Jun 3 00:28:34.547: INFO: stdout: "true" Jun 3 00:28:34.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jn5pq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:34.646: INFO: stderr: "" Jun 3 00:28:34.646: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:28:34.646: INFO: validating pod update-demo-nautilus-jn5pq Jun 3 00:28:34.650: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:28:34.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:28:34.650: INFO: update-demo-nautilus-jn5pq is verified up and running Jun 3 00:28:34.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:34.747: INFO: stderr: "" Jun 3 00:28:34.747: INFO: stdout: "true" Jun 3 00:28:34.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:34.846: INFO: stderr: "" Jun 3 00:28:34.846: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:28:34.846: INFO: validating pod update-demo-nautilus-kjcdf Jun 3 00:28:34.850: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:28:34.850: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:28:34.850: INFO: update-demo-nautilus-kjcdf is verified up and running STEP: scaling down the replication controller Jun 3 00:28:34.852: INFO: scanned /root for discovery docs: Jun 3 00:28:34.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2991' Jun 3 00:28:36.046: INFO: stderr: "" Jun 3 00:28:36.046: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 3 00:28:36.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:36.148: INFO: stderr: "" Jun 3 00:28:36.148: INFO: stdout: "update-demo-nautilus-jn5pq update-demo-nautilus-kjcdf " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 3 00:28:41.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:41.268: INFO: stderr: "" Jun 3 00:28:41.268: INFO: stdout: "update-demo-nautilus-jn5pq update-demo-nautilus-kjcdf " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 3 00:28:46.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:46.379: INFO: stderr: "" Jun 3 00:28:46.379: INFO: stdout: "update-demo-nautilus-kjcdf " Jun 3 00:28:46.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:46.468: INFO: stderr: "" Jun 3 00:28:46.468: INFO: stdout: "true" Jun 3 00:28:46.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:46.559: INFO: stderr: "" Jun 3 00:28:46.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:28:46.559: INFO: validating pod update-demo-nautilus-kjcdf Jun 3 00:28:46.562: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:28:46.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:28:46.562: INFO: update-demo-nautilus-kjcdf is verified up and running STEP: scaling up the replication controller Jun 3 00:28:46.566: INFO: scanned /root for discovery docs: Jun 3 00:28:46.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2991' Jun 3 00:28:47.691: INFO: stderr: "" Jun 3 00:28:47.691: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 00:28:47.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:47.794: INFO: stderr: "" Jun 3 00:28:47.794: INFO: stdout: "update-demo-nautilus-kjcdf update-demo-nautilus-wwb68 " Jun 3 00:28:47.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:47.961: INFO: stderr: "" Jun 3 00:28:47.961: INFO: stdout: "true" Jun 3 00:28:47.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:48.057: INFO: stderr: "" Jun 3 00:28:48.057: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:28:48.057: INFO: validating pod update-demo-nautilus-kjcdf Jun 3 00:28:48.060: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:28:48.060: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:28:48.060: INFO: update-demo-nautilus-kjcdf is verified up and running Jun 3 00:28:48.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwb68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:48.157: INFO: stderr: "" Jun 3 00:28:48.157: INFO: stdout: "" Jun 3 00:28:48.157: INFO: update-demo-nautilus-wwb68 is created but not running Jun 3 00:28:53.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2991' Jun 3 00:28:53.261: INFO: stderr: "" Jun 3 00:28:53.261: INFO: stdout: "update-demo-nautilus-kjcdf update-demo-nautilus-wwb68 " Jun 3 00:28:53.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:53.350: INFO: stderr: "" Jun 3 00:28:53.350: INFO: stdout: "true" Jun 3 00:28:53.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjcdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:53.446: INFO: stderr: "" Jun 3 00:28:53.446: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:28:53.446: INFO: validating pod update-demo-nautilus-kjcdf Jun 3 00:28:53.449: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:28:53.449: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:28:53.449: INFO: update-demo-nautilus-kjcdf is verified up and running Jun 3 00:28:53.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwb68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:53.554: INFO: stderr: "" Jun 3 00:28:53.554: INFO: stdout: "true" Jun 3 00:28:53.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwb68 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2991' Jun 3 00:28:53.656: INFO: stderr: "" Jun 3 00:28:53.656: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 00:28:53.656: INFO: validating pod update-demo-nautilus-wwb68 Jun 3 00:28:53.660: INFO: got data: { "image": "nautilus.jpg" } Jun 3 00:28:53.660: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 00:28:53.660: INFO: update-demo-nautilus-wwb68 is verified up and running STEP: using delete to clean up resources Jun 3 00:28:53.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2991' Jun 3 00:28:53.774: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:28:53.774: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 3 00:28:53.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2991' Jun 3 00:28:53.938: INFO: stderr: "No resources found in kubectl-2991 namespace.\n" Jun 3 00:28:53.938: INFO: stdout: "" Jun 3 00:28:53.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2991 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 00:28:54.108: INFO: stderr: "" Jun 3 00:28:54.108: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:28:54.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2991" for this suite. 
• [SLOW TEST:25.470 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":187,"skipped":3082,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:28:54.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 3 00:28:59.026: INFO: Successfully updated pod "annotationupdate790f89f1-e8ce-451a-8e78-07e0825021a0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:29:01.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9180" for this suite. 
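[Note] The mechanism behind this test is a projected downwardAPI volume: metadata.annotations is projected into a file that the kubelet rewrites whenever the pod's annotations change, which is why updating the pod is enough to change what the container reads. A small sketch of just that volume definition, runnable offline since it only prints the JSON; the volume name and file path are illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// File the kubelet keeps in sync with the pod's
							// current annotations.
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(vol); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

The gap between "Successfully updated pod" and teardown in the log is the wait for that projection to propagate into the container's view of the file.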
• [SLOW TEST:6.921 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:29:01.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:29:01.128: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8550 I0603 00:29:01.182407 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8550, replica count: 1 I0603 00:29:02.232905 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:29:03.233306 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:29:04.233593 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:29:04.363: INFO: Created: latency-svc-khcfc Jun 3 00:29:04.374: INFO: Got endpoints: latency-svc-khcfc [40.598712ms] Jun 3 00:29:04.405: INFO: Created: latency-svc-nsw9c Jun 3 00:29:04.422: INFO: Got endpoints: latency-svc-nsw9c [48.351915ms] Jun 3 00:29:04.495: INFO: Created: latency-svc-sghj5 Jun 3 00:29:04.525: INFO: Got endpoints: latency-svc-sghj5 [150.486196ms] Jun 3 00:29:04.525: INFO: Created: latency-svc-lj98m Jun 3 00:29:04.567: INFO: Got endpoints: latency-svc-lj98m [192.908604ms] Jun 3 00:29:04.644: INFO: Created: latency-svc-mjfhl Jun 3 00:29:04.648: INFO: Got endpoints: latency-svc-mjfhl [273.494336ms] Jun 3 00:29:04.692: INFO: Created: latency-svc-vldcp Jun 3 00:29:04.707: INFO: Got endpoints: latency-svc-vldcp [332.522965ms] Jun 3 00:29:04.794: INFO: Created: latency-svc-nc886 Jun 3 00:29:04.800: INFO: Got endpoints: latency-svc-nc886 [426.021122ms] Jun 3 00:29:04.830: INFO: Created: latency-svc-hmql6 Jun 3 00:29:04.842: INFO: Got endpoints: latency-svc-hmql6 [468.279335ms] Jun 3 00:29:04.861: INFO: Created: latency-svc-qm7m5 Jun 3 00:29:04.874: INFO: Got endpoints: latency-svc-qm7m5 [499.551325ms] Jun 3 00:29:04.937: INFO: Created: latency-svc-s6lc9 Jun 3 00:29:04.950: INFO: Got endpoints: latency-svc-s6lc9 [575.655712ms] Jun 3 00:29:04.975: INFO: Created: latency-svc-lj7hn Jun 3 00:29:04.999: INFO: Got endpoints: latency-svc-lj7hn [624.339299ms] Jun 3 
00:29:05.029: INFO: Created: latency-svc-4wc4p Jun 3 00:29:05.070: INFO: Got endpoints: latency-svc-4wc4p [695.454522ms] Jun 3 00:29:05.095: INFO: Created: latency-svc-s2xsp Jun 3 00:29:05.120: INFO: Got endpoints: latency-svc-s2xsp [746.072292ms] Jun 3 00:29:05.142: INFO: Created: latency-svc-zqd98 Jun 3 00:29:05.166: INFO: Got endpoints: latency-svc-zqd98 [791.728176ms] Jun 3 00:29:05.210: INFO: Created: latency-svc-l7q79 Jun 3 00:29:05.222: INFO: Got endpoints: latency-svc-l7q79 [847.855614ms] Jun 3 00:29:05.244: INFO: Created: latency-svc-9bvch Jun 3 00:29:05.259: INFO: Got endpoints: latency-svc-9bvch [884.638361ms] Jun 3 00:29:05.299: INFO: Created: latency-svc-nml9d Jun 3 00:29:05.363: INFO: Got endpoints: latency-svc-nml9d [940.207432ms] Jun 3 00:29:05.382: INFO: Created: latency-svc-nf5c5 Jun 3 00:29:05.424: INFO: Got endpoints: latency-svc-nf5c5 [899.389319ms] Jun 3 00:29:05.519: INFO: Created: latency-svc-2mxw5 Jun 3 00:29:05.524: INFO: Got endpoints: latency-svc-2mxw5 [956.62655ms] Jun 3 00:29:05.586: INFO: Created: latency-svc-gcd5z Jun 3 00:29:05.608: INFO: Got endpoints: latency-svc-gcd5z [960.022477ms] Jun 3 00:29:05.695: INFO: Created: latency-svc-6p92z Jun 3 00:29:05.711: INFO: Got endpoints: latency-svc-6p92z [1.004051452s] Jun 3 00:29:05.730: INFO: Created: latency-svc-hhrvh Jun 3 00:29:05.740: INFO: Got endpoints: latency-svc-hhrvh [939.485507ms] Jun 3 00:29:05.761: INFO: Created: latency-svc-wmzk8 Jun 3 00:29:05.813: INFO: Got endpoints: latency-svc-wmzk8 [970.864446ms] Jun 3 00:29:05.875: INFO: Created: latency-svc-hptc2 Jun 3 00:29:05.897: INFO: Got endpoints: latency-svc-hptc2 [1.023703219s] Jun 3 00:29:05.982: INFO: Created: latency-svc-2j629 Jun 3 00:29:05.987: INFO: Got endpoints: latency-svc-2j629 [1.037073391s] Jun 3 00:29:06.012: INFO: Created: latency-svc-fkvdf Jun 3 00:29:06.023: INFO: Got endpoints: latency-svc-fkvdf [1.024198821s] Jun 3 00:29:06.048: INFO: Created: latency-svc-zmzg6 Jun 3 00:29:06.060: INFO: Got endpoints: latency-svc-zmzg6 [989.666377ms] Jun 3 00:29:06.183: INFO: Created: latency-svc-rz66f Jun 3 00:29:06.189: INFO: Got endpoints: latency-svc-rz66f [1.068719296s] Jun 3 00:29:06.229: INFO: Created: latency-svc-b4mck Jun 3 00:29:06.264: INFO: Got endpoints: latency-svc-b4mck [1.098085388s] Jun 3 00:29:06.351: INFO: Created: latency-svc-mlbw6 Jun 3 00:29:06.360: INFO: Got endpoints: latency-svc-mlbw6 [1.137992021s] Jun 3 00:29:06.391: INFO: Created: latency-svc-wrwmr Jun 3 00:29:06.402: INFO: Got endpoints: latency-svc-wrwmr [1.143669657s] Jun 3 00:29:06.420: INFO: Created: latency-svc-zwqfj Jun 3 00:29:06.433: INFO: Got endpoints: latency-svc-zwqfj [1.070088453s] Jun 3 00:29:06.451: INFO: Created: latency-svc-4bmlb Jun 3 00:29:06.570: INFO: Got endpoints: latency-svc-4bmlb [1.145991489s] Jun 3 00:29:06.570: INFO: Created: latency-svc-77w74 Jun 3 00:29:06.618: INFO: Got endpoints: latency-svc-77w74 [1.094519012s] Jun 3 00:29:06.708: INFO: Created: latency-svc-h5pzm Jun 3 00:29:06.734: INFO: Got endpoints: latency-svc-h5pzm [1.126003601s] Jun 3 00:29:06.762: INFO: Created: latency-svc-7cg6z Jun 3 00:29:06.776: INFO: Got endpoints: latency-svc-7cg6z [1.064527079s] Jun 3 00:29:06.816: INFO: Created: latency-svc-fwf4z Jun 3 00:29:06.840: INFO: Got endpoints: latency-svc-fwf4z [1.100028628s] Jun 3 00:29:06.877: INFO: Created: latency-svc-d8t5c Jun 3 00:29:06.890: INFO: Got endpoints: latency-svc-d8t5c [1.077073628s] Jun 3 00:29:06.976: INFO: Created: latency-svc-kt757 Jun 3 00:29:06.981: INFO: Got endpoints: latency-svc-kt757 [1.083875353s] Jun 3 
00:29:07.012: INFO: Created: latency-svc-qssfk Jun 3 00:29:07.023: INFO: Got endpoints: latency-svc-qssfk [1.036165356s] Jun 3 00:29:07.050: INFO: Created: latency-svc-gzdkz Jun 3 00:29:07.148: INFO: Got endpoints: latency-svc-gzdkz [1.124865996s] Jun 3 00:29:07.215: INFO: Created: latency-svc-dmcxm Jun 3 00:29:07.227: INFO: Got endpoints: latency-svc-dmcxm [1.16774333s] Jun 3 00:29:07.285: INFO: Created: latency-svc-gdv28 Jun 3 00:29:07.288: INFO: Got endpoints: latency-svc-gdv28 [1.098798706s] Jun 3 00:29:07.336: INFO: Created: latency-svc-nx8sj Jun 3 00:29:07.362: INFO: Got endpoints: latency-svc-nx8sj [1.098334076s] Jun 3 00:29:07.429: INFO: Created: latency-svc-dhdvv Jun 3 00:29:07.439: INFO: Got endpoints: latency-svc-dhdvv [1.07839602s] Jun 3 00:29:07.464: INFO: Created: latency-svc-bfll5 Jun 3 00:29:07.475: INFO: Got endpoints: latency-svc-bfll5 [1.072086849s] Jun 3 00:29:07.500: INFO: Created: latency-svc-4lc6m Jun 3 00:29:07.524: INFO: Got endpoints: latency-svc-4lc6m [1.090920826s] Jun 3 00:29:07.612: INFO: Created: latency-svc-s5bnw Jun 3 00:29:07.620: INFO: Got endpoints: latency-svc-s5bnw [1.049767039s] Jun 3 00:29:07.651: INFO: Created: latency-svc-blh9j Jun 3 00:29:07.680: INFO: Got endpoints: latency-svc-blh9j [1.061815586s] Jun 3 00:29:07.776: INFO: Created: latency-svc-8c2wz Jun 3 00:29:07.789: INFO: Got endpoints: latency-svc-8c2wz [1.054952554s] Jun 3 00:29:07.812: INFO: Created: latency-svc-9zxbg Jun 3 00:29:07.843: INFO: Got endpoints: latency-svc-9zxbg [1.06698284s] Jun 3 00:29:07.927: INFO: Created: latency-svc-ggfvk Jun 3 00:29:07.941: INFO: Got endpoints: latency-svc-ggfvk [1.101453242s] Jun 3 00:29:07.962: INFO: Created: latency-svc-nzc9s Jun 3 00:29:07.976: INFO: Got endpoints: latency-svc-nzc9s [1.085133846s] Jun 3 00:29:08.016: INFO: Created: latency-svc-swsdr Jun 3 00:29:08.094: INFO: Got endpoints: latency-svc-swsdr [1.112998912s] Jun 3 00:29:08.124: INFO: Created: latency-svc-zxj4d Jun 3 00:29:08.131: INFO: Got endpoints: latency-svc-zxj4d [1.107707687s] Jun 3 00:29:08.172: INFO: Created: latency-svc-gmrzs Jun 3 00:29:08.185: INFO: Got endpoints: latency-svc-gmrzs [1.037616224s] Jun 3 00:29:08.262: INFO: Created: latency-svc-rj2pz Jun 3 00:29:08.298: INFO: Got endpoints: latency-svc-rj2pz [1.071129581s] Jun 3 00:29:08.335: INFO: Created: latency-svc-6z8hq Jun 3 00:29:08.348: INFO: Got endpoints: latency-svc-6z8hq [1.060089047s] Jun 3 00:29:08.411: INFO: Created: latency-svc-5spts Jun 3 00:29:08.416: INFO: Got endpoints: latency-svc-5spts [1.053869412s] Jun 3 00:29:08.447: INFO: Created: latency-svc-8pd2w Jun 3 00:29:08.463: INFO: Got endpoints: latency-svc-8pd2w [1.024377191s] Jun 3 00:29:08.484: INFO: Created: latency-svc-qzrkv Jun 3 00:29:08.499: INFO: Got endpoints: latency-svc-qzrkv [1.024154009s] Jun 3 00:29:08.585: INFO: Created: latency-svc-9swlx Jun 3 00:29:08.595: INFO: Got endpoints: latency-svc-9swlx [1.070901826s] Jun 3 00:29:08.616: INFO: Created: latency-svc-skt44 Jun 3 00:29:08.659: INFO: Got endpoints: latency-svc-skt44 [1.038855712s] Jun 3 00:29:08.746: INFO: Created: latency-svc-whp4k Jun 3 00:29:08.750: INFO: Got endpoints: latency-svc-whp4k [1.070150536s] Jun 3 00:29:08.790: INFO: Created: latency-svc-2d599 Jun 3 00:29:08.794: INFO: Got endpoints: latency-svc-2d599 [1.00534385s] Jun 3 00:29:08.845: INFO: Created: latency-svc-9cjvw Jun 3 00:29:09.148: INFO: Got endpoints: latency-svc-9cjvw [1.305309837s] Jun 3 00:29:09.375: INFO: Created: latency-svc-gggz6 Jun 3 00:29:09.408: INFO: Got endpoints: latency-svc-gggz6 [1.466584656s] Jun 3 
00:29:09.438: INFO: Created: latency-svc-9jgt7 Jun 3 00:29:09.455: INFO: Got endpoints: latency-svc-9jgt7 [1.478972637s] Jun 3 00:29:09.474: INFO: Created: latency-svc-fx7xr Jun 3 00:29:09.554: INFO: Got endpoints: latency-svc-fx7xr [1.459627569s] Jun 3 00:29:09.840: INFO: Created: latency-svc-k994c Jun 3 00:29:09.986: INFO: Got endpoints: latency-svc-k994c [1.855533703s] Jun 3 00:29:10.014: INFO: Created: latency-svc-hhdwk Jun 3 00:29:10.044: INFO: Got endpoints: latency-svc-hhdwk [1.858189291s] Jun 3 00:29:10.135: INFO: Created: latency-svc-gb76k Jun 3 00:29:10.170: INFO: Got endpoints: latency-svc-gb76k [1.87176261s] Jun 3 00:29:10.170: INFO: Created: latency-svc-zglx6 Jun 3 00:29:10.206: INFO: Got endpoints: latency-svc-zglx6 [1.858106416s] Jun 3 00:29:10.234: INFO: Created: latency-svc-hjhxb Jun 3 00:29:10.279: INFO: Got endpoints: latency-svc-hjhxb [1.862657684s] Jun 3 00:29:10.302: INFO: Created: latency-svc-z9sjv Jun 3 00:29:10.314: INFO: Got endpoints: latency-svc-z9sjv [1.850742949s] Jun 3 00:29:10.333: INFO: Created: latency-svc-h292r Jun 3 00:29:10.375: INFO: Got endpoints: latency-svc-h292r [1.875724722s] Jun 3 00:29:10.422: INFO: Created: latency-svc-72l52 Jun 3 00:29:10.426: INFO: Got endpoints: latency-svc-72l52 [1.830876871s] Jun 3 00:29:10.451: INFO: Created: latency-svc-lvrzk Jun 3 00:29:10.465: INFO: Got endpoints: latency-svc-lvrzk [1.8061131s] Jun 3 00:29:10.482: INFO: Created: latency-svc-gq5dv Jun 3 00:29:10.501: INFO: Got endpoints: latency-svc-gq5dv [1.750432598s] Jun 3 00:29:10.562: INFO: Created: latency-svc-4psrd Jun 3 00:29:10.585: INFO: Got endpoints: latency-svc-4psrd [1.79042403s] Jun 3 00:29:10.626: INFO: Created: latency-svc-zs58l Jun 3 00:29:10.658: INFO: Got endpoints: latency-svc-zs58l [1.51007953s] Jun 3 00:29:10.746: INFO: Created: latency-svc-qbcck Jun 3 00:29:10.749: INFO: Got endpoints: latency-svc-qbcck [1.341186549s] Jun 3 00:29:10.798: INFO: Created: latency-svc-mfr6n Jun 3 00:29:10.816: INFO: Got endpoints: latency-svc-mfr6n [1.360960242s] Jun 3 00:29:10.901: INFO: Created: latency-svc-z87vp Jun 3 00:29:10.926: INFO: Got endpoints: latency-svc-z87vp [1.37224447s] Jun 3 00:29:10.956: INFO: Created: latency-svc-225gs Jun 3 00:29:10.971: INFO: Got endpoints: latency-svc-225gs [984.127062ms] Jun 3 00:29:10.992: INFO: Created: latency-svc-x7rvb Jun 3 00:29:11.039: INFO: Got endpoints: latency-svc-x7rvb [995.713389ms] Jun 3 00:29:11.046: INFO: Created: latency-svc-l2c2l Jun 3 00:29:11.094: INFO: Got endpoints: latency-svc-l2c2l [923.863604ms] Jun 3 00:29:11.177: INFO: Created: latency-svc-nsz74 Jun 3 00:29:11.196: INFO: Created: latency-svc-79rjs Jun 3 00:29:11.197: INFO: Got endpoints: latency-svc-nsz74 [990.8989ms] Jun 3 00:29:11.219: INFO: Got endpoints: latency-svc-79rjs [940.302159ms] Jun 3 00:29:11.251: INFO: Created: latency-svc-rjvk6 Jun 3 00:29:11.259: INFO: Got endpoints: latency-svc-rjvk6 [945.283757ms] Jun 3 00:29:11.344: INFO: Created: latency-svc-9cdvc Jun 3 00:29:11.361: INFO: Got endpoints: latency-svc-9cdvc [986.591004ms] Jun 3 00:29:11.382: INFO: Created: latency-svc-rvm25 Jun 3 00:29:11.398: INFO: Got endpoints: latency-svc-rvm25 [972.625737ms] Jun 3 00:29:11.419: INFO: Created: latency-svc-t4fmm Jun 3 00:29:11.434: INFO: Got endpoints: latency-svc-t4fmm [969.348925ms] Jun 3 00:29:11.489: INFO: Created: latency-svc-cm8js Jun 3 00:29:11.514: INFO: Got endpoints: latency-svc-cm8js [1.013571945s] Jun 3 00:29:11.515: INFO: Created: latency-svc-gs96t Jun 3 00:29:11.538: INFO: Got endpoints: latency-svc-gs96t [953.445605ms] Jun 3 
00:29:11.562: INFO: Created: latency-svc-gndb7 Jun 3 00:29:11.658: INFO: Got endpoints: latency-svc-gndb7 [1.000324071s] Jun 3 00:29:11.688: INFO: Created: latency-svc-rg89l Jun 3 00:29:11.700: INFO: Got endpoints: latency-svc-rg89l [950.35203ms] Jun 3 00:29:11.730: INFO: Created: latency-svc-6599p Jun 3 00:29:11.742: INFO: Got endpoints: latency-svc-6599p [926.455976ms] Jun 3 00:29:11.814: INFO: Created: latency-svc-7swp4 Jun 3 00:29:11.818: INFO: Got endpoints: latency-svc-7swp4 [891.194163ms] Jun 3 00:29:11.857: INFO: Created: latency-svc-7gskk Jun 3 00:29:11.910: INFO: Got endpoints: latency-svc-7gskk [939.041245ms] Jun 3 00:29:11.974: INFO: Created: latency-svc-s8g88 Jun 3 00:29:11.983: INFO: Got endpoints: latency-svc-s8g88 [943.19383ms] Jun 3 00:29:12.012: INFO: Created: latency-svc-mlgvs Jun 3 00:29:12.043: INFO: Got endpoints: latency-svc-mlgvs [948.484878ms] Jun 3 00:29:12.135: INFO: Created: latency-svc-r4zqx Jun 3 00:29:12.151: INFO: Got endpoints: latency-svc-r4zqx [953.823137ms] Jun 3 00:29:12.191: INFO: Created: latency-svc-qx6rt Jun 3 00:29:12.215: INFO: Got endpoints: latency-svc-qx6rt [996.079009ms] Jun 3 00:29:12.273: INFO: Created: latency-svc-fbrrf Jun 3 00:29:12.300: INFO: Got endpoints: latency-svc-fbrrf [1.040343408s] Jun 3 00:29:12.330: INFO: Created: latency-svc-ghtzg Jun 3 00:29:12.344: INFO: Got endpoints: latency-svc-ghtzg [982.276901ms] Jun 3 00:29:12.360: INFO: Created: latency-svc-5lznn Jun 3 00:29:12.422: INFO: Got endpoints: latency-svc-5lznn [1.024147116s] Jun 3 00:29:12.424: INFO: Created: latency-svc-wthrv Jun 3 00:29:12.434: INFO: Got endpoints: latency-svc-wthrv [999.662742ms] Jun 3 00:29:12.462: INFO: Created: latency-svc-d2262 Jun 3 00:29:12.498: INFO: Got endpoints: latency-svc-d2262 [983.744963ms] Jun 3 00:29:12.572: INFO: Created: latency-svc-d47bl Jun 3 00:29:12.576: INFO: Got endpoints: latency-svc-d47bl [1.037444504s] Jun 3 00:29:12.647: INFO: Created: latency-svc-2cldb Jun 3 00:29:12.659: INFO: Got endpoints: latency-svc-2cldb [1.000028605s] Jun 3 00:29:12.740: INFO: Created: latency-svc-2ffj9 Jun 3 00:29:12.745: INFO: Got endpoints: latency-svc-2ffj9 [1.044953997s] Jun 3 00:29:12.774: INFO: Created: latency-svc-9gwnd Jun 3 00:29:12.784: INFO: Got endpoints: latency-svc-9gwnd [1.042268386s] Jun 3 00:29:12.920: INFO: Created: latency-svc-722gx Jun 3 00:29:12.928: INFO: Got endpoints: latency-svc-722gx [1.110530017s] Jun 3 00:29:12.948: INFO: Created: latency-svc-nnk98 Jun 3 00:29:12.958: INFO: Got endpoints: latency-svc-nnk98 [1.047836188s] Jun 3 00:29:12.972: INFO: Created: latency-svc-p6j8g Jun 3 00:29:12.989: INFO: Got endpoints: latency-svc-p6j8g [1.006486813s] Jun 3 00:29:13.020: INFO: Created: latency-svc-x7qwr Jun 3 00:29:13.097: INFO: Created: latency-svc-7fwll Jun 3 00:29:13.116: INFO: Got endpoints: latency-svc-7fwll [965.355116ms] Jun 3 00:29:13.117: INFO: Got endpoints: latency-svc-x7qwr [1.073981339s] Jun 3 00:29:13.152: INFO: Created: latency-svc-vbvmq Jun 3 00:29:13.168: INFO: Got endpoints: latency-svc-vbvmq [952.179297ms] Jun 3 00:29:13.188: INFO: Created: latency-svc-64v5c Jun 3 00:29:13.257: INFO: Created: latency-svc-kj2jn Jun 3 00:29:13.257: INFO: Got endpoints: latency-svc-64v5c [957.775071ms] Jun 3 00:29:13.278: INFO: Got endpoints: latency-svc-kj2jn [934.358199ms] Jun 3 00:29:13.314: INFO: Created: latency-svc-rb66b Jun 3 00:29:13.325: INFO: Got endpoints: latency-svc-rb66b [902.67362ms] Jun 3 00:29:13.416: INFO: Created: latency-svc-dhq54 Jun 3 00:29:13.420: INFO: Got endpoints: latency-svc-dhq54 [985.56248ms] Jun 3 
00:29:13.451: INFO: Created: latency-svc-8zd5q Jun 3 00:29:13.463: INFO: Got endpoints: latency-svc-8zd5q [964.633019ms] Jun 3 00:29:13.482: INFO: Created: latency-svc-wkt5g Jun 3 00:29:13.493: INFO: Got endpoints: latency-svc-wkt5g [917.498735ms] Jun 3 00:29:13.512: INFO: Created: latency-svc-gffgn Jun 3 00:29:13.579: INFO: Got endpoints: latency-svc-gffgn [920.170191ms] Jun 3 00:29:13.580: INFO: Created: latency-svc-zgpgk Jun 3 00:29:13.590: INFO: Got endpoints: latency-svc-zgpgk [845.030599ms] Jun 3 00:29:13.638: INFO: Created: latency-svc-lrxsr Jun 3 00:29:13.746: INFO: Got endpoints: latency-svc-lrxsr [961.654899ms] Jun 3 00:29:13.750: INFO: Created: latency-svc-m8t4g Jun 3 00:29:13.759: INFO: Got endpoints: latency-svc-m8t4g [830.503562ms] Jun 3 00:29:13.782: INFO: Created: latency-svc-wn6rx Jun 3 00:29:13.795: INFO: Got endpoints: latency-svc-wn6rx [837.507393ms] Jun 3 00:29:13.813: INFO: Created: latency-svc-nmhnd Jun 3 00:29:13.826: INFO: Got endpoints: latency-svc-nmhnd [836.310279ms] Jun 3 00:29:13.926: INFO: Created: latency-svc-d9k69 Jun 3 00:29:13.949: INFO: Got endpoints: latency-svc-d9k69 [832.074722ms] Jun 3 00:29:13.953: INFO: Created: latency-svc-zrz9m Jun 3 00:29:13.973: INFO: Got endpoints: latency-svc-zrz9m [856.575136ms] Jun 3 00:29:13.997: INFO: Created: latency-svc-4rdjt Jun 3 00:29:14.013: INFO: Got endpoints: latency-svc-4rdjt [844.798745ms] Jun 3 00:29:14.071: INFO: Created: latency-svc-2qp72 Jun 3 00:29:14.148: INFO: Got endpoints: latency-svc-2qp72 [890.218198ms] Jun 3 00:29:14.148: INFO: Created: latency-svc-fxcxh Jun 3 00:29:14.225: INFO: Got endpoints: latency-svc-fxcxh [946.882687ms] Jun 3 00:29:14.243: INFO: Created: latency-svc-dlvzj Jun 3 00:29:14.258: INFO: Got endpoints: latency-svc-dlvzj [929.445985ms] Jun 3 00:29:14.279: INFO: Created: latency-svc-gzxgx Jun 3 00:29:14.294: INFO: Got endpoints: latency-svc-gzxgx [873.929628ms] Jun 3 00:29:14.310: INFO: Created: latency-svc-nfxvp Jun 3 00:29:14.328: INFO: Got endpoints: latency-svc-nfxvp [865.354174ms] Jun 3 00:29:14.380: INFO: Created: latency-svc-pgvsb Jun 3 00:29:14.390: INFO: Got endpoints: latency-svc-pgvsb [897.113856ms] Jun 3 00:29:14.411: INFO: Created: latency-svc-m2vfh Jun 3 00:29:14.427: INFO: Got endpoints: latency-svc-m2vfh [848.297181ms] Jun 3 00:29:14.478: INFO: Created: latency-svc-dkbdn Jun 3 00:29:14.543: INFO: Got endpoints: latency-svc-dkbdn [952.770386ms] Jun 3 00:29:14.556: INFO: Created: latency-svc-nn94w Jun 3 00:29:14.592: INFO: Got endpoints: latency-svc-nn94w [845.484197ms] Jun 3 00:29:14.628: INFO: Created: latency-svc-9d99z Jun 3 00:29:14.747: INFO: Got endpoints: latency-svc-9d99z [987.796343ms] Jun 3 00:29:14.750: INFO: Created: latency-svc-2kj22 Jun 3 00:29:14.759: INFO: Got endpoints: latency-svc-2kj22 [963.348122ms] Jun 3 00:29:14.825: INFO: Created: latency-svc-prwzb Jun 3 00:29:14.842: INFO: Got endpoints: latency-svc-prwzb [1.016717285s] Jun 3 00:29:14.902: INFO: Created: latency-svc-dwvqb Jun 3 00:29:14.910: INFO: Got endpoints: latency-svc-dwvqb [961.163128ms] Jun 3 00:29:14.934: INFO: Created: latency-svc-567g4 Jun 3 00:29:14.975: INFO: Got endpoints: latency-svc-567g4 [1.002222045s] Jun 3 00:29:15.052: INFO: Created: latency-svc-58ss9 Jun 3 00:29:15.077: INFO: Got endpoints: latency-svc-58ss9 [1.064305843s] Jun 3 00:29:15.101: INFO: Created: latency-svc-xs4jn Jun 3 00:29:15.114: INFO: Got endpoints: latency-svc-xs4jn [966.441238ms] Jun 3 00:29:15.213: INFO: Created: latency-svc-5tlbn Jun 3 00:29:15.247: INFO: Got endpoints: latency-svc-5tlbn [1.022087568s] Jun 
3 00:29:15.263: INFO: Created: latency-svc-ntb2b Jun 3 00:29:15.276: INFO: Got endpoints: latency-svc-ntb2b [1.018418642s] Jun 3 00:29:15.293: INFO: Created: latency-svc-426gc Jun 3 00:29:15.306: INFO: Got endpoints: latency-svc-426gc [1.012483656s] Jun 3 00:29:15.395: INFO: Created: latency-svc-x4sdm Jun 3 00:29:15.425: INFO: Got endpoints: latency-svc-x4sdm [1.096714979s] Jun 3 00:29:15.426: INFO: Created: latency-svc-psbx7 Jun 3 00:29:15.450: INFO: Got endpoints: latency-svc-psbx7 [1.059181455s] Jun 3 00:29:15.548: INFO: Created: latency-svc-skm2x Jun 3 00:29:15.560: INFO: Got endpoints: latency-svc-skm2x [1.133317118s] Jun 3 00:29:15.599: INFO: Created: latency-svc-nf7lr Jun 3 00:29:15.609: INFO: Got endpoints: latency-svc-nf7lr [1.065802655s] Jun 3 00:29:15.636: INFO: Created: latency-svc-fpxz2 Jun 3 00:29:15.716: INFO: Got endpoints: latency-svc-fpxz2 [1.124132012s] Jun 3 00:29:15.732: INFO: Created: latency-svc-fpbdl Jun 3 00:29:15.753: INFO: Got endpoints: latency-svc-fpbdl [1.006009503s] Jun 3 00:29:15.783: INFO: Created: latency-svc-2hngp Jun 3 00:29:15.796: INFO: Got endpoints: latency-svc-2hngp [1.037673482s] Jun 3 00:29:15.815: INFO: Created: latency-svc-z2zsk Jun 3 00:29:15.883: INFO: Got endpoints: latency-svc-z2zsk [1.041130692s] Jun 3 00:29:15.885: INFO: Created: latency-svc-82kv6 Jun 3 00:29:15.905: INFO: Got endpoints: latency-svc-82kv6 [995.186681ms] Jun 3 00:29:16.010: INFO: Created: latency-svc-rwgql Jun 3 00:29:16.022: INFO: Got endpoints: latency-svc-rwgql [1.046345884s] Jun 3 00:29:16.043: INFO: Created: latency-svc-9hhqw Jun 3 00:29:16.054: INFO: Got endpoints: latency-svc-9hhqw [977.033292ms] Jun 3 00:29:16.165: INFO: Created: latency-svc-g7phf Jun 3 00:29:16.170: INFO: Got endpoints: latency-svc-g7phf [1.055471925s] Jun 3 00:29:16.217: INFO: Created: latency-svc-6tmkc Jun 3 00:29:16.240: INFO: Got endpoints: latency-svc-6tmkc [993.302754ms] Jun 3 00:29:16.259: INFO: Created: latency-svc-xqfzh Jun 3 00:29:16.321: INFO: Got endpoints: latency-svc-xqfzh [1.04530645s] Jun 3 00:29:16.343: INFO: Created: latency-svc-7d7d9 Jun 3 00:29:16.355: INFO: Got endpoints: latency-svc-7d7d9 [1.048738106s] Jun 3 00:29:16.373: INFO: Created: latency-svc-6x5tr Jun 3 00:29:16.385: INFO: Got endpoints: latency-svc-6x5tr [960.202456ms] Jun 3 00:29:16.403: INFO: Created: latency-svc-cks4v Jun 3 00:29:16.416: INFO: Got endpoints: latency-svc-cks4v [966.536667ms] Jun 3 00:29:16.483: INFO: Created: latency-svc-plw9z Jun 3 00:29:16.506: INFO: Got endpoints: latency-svc-plw9z [945.179775ms] Jun 3 00:29:16.547: INFO: Created: latency-svc-9lfc7 Jun 3 00:29:16.573: INFO: Got endpoints: latency-svc-9lfc7 [964.132109ms] Jun 3 00:29:16.626: INFO: Created: latency-svc-d48lx Jun 3 00:29:16.632: INFO: Got endpoints: latency-svc-d48lx [915.975863ms] Jun 3 00:29:16.655: INFO: Created: latency-svc-zqvs7 Jun 3 00:29:16.669: INFO: Got endpoints: latency-svc-zqvs7 [915.866164ms] Jun 3 00:29:16.722: INFO: Created: latency-svc-lhngb Jun 3 00:29:16.789: INFO: Got endpoints: latency-svc-lhngb [992.657474ms] Jun 3 00:29:16.791: INFO: Created: latency-svc-rtztf Jun 3 00:29:16.807: INFO: Got endpoints: latency-svc-rtztf [923.199473ms] Jun 3 00:29:16.830: INFO: Created: latency-svc-dp6gg Jun 3 00:29:16.843: INFO: Got endpoints: latency-svc-dp6gg [937.371681ms] Jun 3 00:29:16.865: INFO: Created: latency-svc-ngx8w Jun 3 00:29:16.886: INFO: Got endpoints: latency-svc-ngx8w [864.491362ms] Jun 3 00:29:16.950: INFO: Created: latency-svc-k55gw Jun 3 00:29:16.958: INFO: Got endpoints: latency-svc-k55gw [903.795473ms] Jun 
3 00:29:16.979: INFO: Created: latency-svc-vvrrw Jun 3 00:29:16.994: INFO: Got endpoints: latency-svc-vvrrw [824.543894ms] Jun 3 00:29:17.015: INFO: Created: latency-svc-f4f5j Jun 3 00:29:17.024: INFO: Got endpoints: latency-svc-f4f5j [783.73007ms] Jun 3 00:29:17.130: INFO: Created: latency-svc-99tpc Jun 3 00:29:17.144: INFO: Got endpoints: latency-svc-99tpc [822.742182ms] Jun 3 00:29:17.166: INFO: Created: latency-svc-8vk85 Jun 3 00:29:17.181: INFO: Got endpoints: latency-svc-8vk85 [825.912139ms] Jun 3 00:29:17.267: INFO: Created: latency-svc-hnhjr Jun 3 00:29:17.270: INFO: Got endpoints: latency-svc-hnhjr [884.4926ms] Jun 3 00:29:17.297: INFO: Created: latency-svc-pwqbw Jun 3 00:29:17.314: INFO: Got endpoints: latency-svc-pwqbw [897.501187ms] Jun 3 00:29:17.333: INFO: Created: latency-svc-hp665 Jun 3 00:29:17.344: INFO: Got endpoints: latency-svc-hp665 [837.877495ms] Jun 3 00:29:17.411: INFO: Created: latency-svc-4ccnz Jun 3 00:29:17.423: INFO: Got endpoints: latency-svc-4ccnz [849.851473ms] Jun 3 00:29:17.448: INFO: Created: latency-svc-czfps Jun 3 00:29:17.459: INFO: Got endpoints: latency-svc-czfps [826.999907ms] Jun 3 00:29:17.477: INFO: Created: latency-svc-7wk27 Jun 3 00:29:17.489: INFO: Got endpoints: latency-svc-7wk27 [820.159525ms] Jun 3 00:29:17.560: INFO: Created: latency-svc-d8h9s Jun 3 00:29:17.598: INFO: Got endpoints: latency-svc-d8h9s [808.668135ms] Jun 3 00:29:17.615: INFO: Created: latency-svc-bdzgb Jun 3 00:29:17.627: INFO: Got endpoints: latency-svc-bdzgb [820.536334ms] Jun 3 00:29:17.651: INFO: Created: latency-svc-bccr7 Jun 3 00:29:17.747: INFO: Got endpoints: latency-svc-bccr7 [904.396184ms] Jun 3 00:29:17.767: INFO: Created: latency-svc-2lw4l Jun 3 00:29:17.778: INFO: Got endpoints: latency-svc-2lw4l [891.925882ms] Jun 3 00:29:17.795: INFO: Created: latency-svc-4wzl2 Jun 3 00:29:17.815: INFO: Got endpoints: latency-svc-4wzl2 [857.377891ms] Jun 3 00:29:17.831: INFO: Created: latency-svc-5t6zd Jun 3 00:29:17.926: INFO: Got endpoints: latency-svc-5t6zd [931.442681ms] Jun 3 00:29:17.935: INFO: Created: latency-svc-s48cb Jun 3 00:29:17.953: INFO: Got endpoints: latency-svc-s48cb [928.71184ms] Jun 3 00:29:17.975: INFO: Created: latency-svc-cj6dg Jun 3 00:29:17.989: INFO: Got endpoints: latency-svc-cj6dg [845.187964ms] Jun 3 00:29:18.005: INFO: Created: latency-svc-ktcd4 Jun 3 00:29:18.022: INFO: Got endpoints: latency-svc-ktcd4 [841.464083ms] Jun 3 00:29:18.071: INFO: Created: latency-svc-7j45d Jun 3 00:29:18.095: INFO: Got endpoints: latency-svc-7j45d [825.246642ms] Jun 3 00:29:18.131: INFO: Created: latency-svc-f5ncq Jun 3 00:29:18.158: INFO: Got endpoints: latency-svc-f5ncq [844.426422ms] Jun 3 00:29:18.243: INFO: Created: latency-svc-dcvwj Jun 3 00:29:18.245: INFO: Got endpoints: latency-svc-dcvwj [901.840418ms] Jun 3 00:29:18.246: INFO: Latencies: [48.351915ms 150.486196ms 192.908604ms 273.494336ms 332.522965ms 426.021122ms 468.279335ms 499.551325ms 575.655712ms 624.339299ms 695.454522ms 746.072292ms 783.73007ms 791.728176ms 808.668135ms 820.159525ms 820.536334ms 822.742182ms 824.543894ms 825.246642ms 825.912139ms 826.999907ms 830.503562ms 832.074722ms 836.310279ms 837.507393ms 837.877495ms 841.464083ms 844.426422ms 844.798745ms 845.030599ms 845.187964ms 845.484197ms 847.855614ms 848.297181ms 849.851473ms 856.575136ms 857.377891ms 864.491362ms 865.354174ms 873.929628ms 884.4926ms 884.638361ms 890.218198ms 891.194163ms 891.925882ms 897.113856ms 897.501187ms 899.389319ms 901.840418ms 902.67362ms 903.795473ms 904.396184ms 915.866164ms 915.975863ms 917.498735ms 
920.170191ms 923.199473ms 923.863604ms 926.455976ms 928.71184ms 929.445985ms 931.442681ms 934.358199ms 937.371681ms 939.041245ms 939.485507ms 940.207432ms 940.302159ms 943.19383ms 945.179775ms 945.283757ms 946.882687ms 948.484878ms 950.35203ms 952.179297ms 952.770386ms 953.445605ms 953.823137ms 956.62655ms 957.775071ms 960.022477ms 960.202456ms 961.163128ms 961.654899ms 963.348122ms 964.132109ms 964.633019ms 965.355116ms 966.441238ms 966.536667ms 969.348925ms 970.864446ms 972.625737ms 977.033292ms 982.276901ms 983.744963ms 984.127062ms 985.56248ms 986.591004ms 987.796343ms 989.666377ms 990.8989ms 992.657474ms 993.302754ms 995.186681ms 995.713389ms 996.079009ms 999.662742ms 1.000028605s 1.000324071s 1.002222045s 1.004051452s 1.00534385s 1.006009503s 1.006486813s 1.012483656s 1.013571945s 1.016717285s 1.018418642s 1.022087568s 1.023703219s 1.024147116s 1.024154009s 1.024198821s 1.024377191s 1.036165356s 1.037073391s 1.037444504s 1.037616224s 1.037673482s 1.038855712s 1.040343408s 1.041130692s 1.042268386s 1.044953997s 1.04530645s 1.046345884s 1.047836188s 1.048738106s 1.049767039s 1.053869412s 1.054952554s 1.055471925s 1.059181455s 1.060089047s 1.061815586s 1.064305843s 1.064527079s 1.065802655s 1.06698284s 1.068719296s 1.070088453s 1.070150536s 1.070901826s 1.071129581s 1.072086849s 1.073981339s 1.077073628s 1.07839602s 1.083875353s 1.085133846s 1.090920826s 1.094519012s 1.096714979s 1.098085388s 1.098334076s 1.098798706s 1.100028628s 1.101453242s 1.107707687s 1.110530017s 1.112998912s 1.124132012s 1.124865996s 1.126003601s 1.133317118s 1.137992021s 1.143669657s 1.145991489s 1.16774333s 1.305309837s 1.341186549s 1.360960242s 1.37224447s 1.459627569s 1.466584656s 1.478972637s 1.51007953s 1.750432598s 1.79042403s 1.8061131s 1.830876871s 1.850742949s 1.855533703s 1.858106416s 1.858189291s 1.862657684s 1.87176261s 1.875724722s] Jun 3 00:29:18.246: INFO: 50 %ile: 987.796343ms Jun 3 00:29:18.246: INFO: 90 %ile: 1.16774333s Jun 3 00:29:18.246: INFO: 99 %ile: 1.87176261s Jun 3 00:29:18.246: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:29:18.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8550" for this suite. 
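[Note] This test creates 200 services backed by the svc-latency-rc pod, times each one from service creation ("Created:") to the endpoints becoming visible ("Got endpoints:"), then reports order statistics over the sorted sample, as in the 50/90/99 %ile lines above. A sketch of that percentile step using the nearest-rank convention; this is one common choice and not necessarily the framework's exact rounding, and the toy sample below is made up.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-quantile (0 < p <= 1) of samples using the
// nearest-rank method: sort, then take the element at rank ceil(p*n).
func percentile(samples []time.Duration, p float64) time.Duration {
	s := append([]time.Duration(nil), samples...)
	sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	idx := int(float64(len(s))*p+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(s) {
		idx = len(s) - 1
	}
	return s[idx]
}

func main() {
	// Toy sample; the run above uses 200 endpoint-availability latencies.
	sample := []time.Duration{
		48 * time.Millisecond, 900 * time.Millisecond, 990 * time.Millisecond,
		1040 * time.Millisecond, 1870 * time.Millisecond,
	}
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(sample, p))
	}
}

The conformance check then asserts these tail latencies stay under the suite's thresholds ("should not be very high").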
• [SLOW TEST:17.195 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":189,"skipped":3147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:29:18.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-1cf78b95-706d-462a-b300-be80a2da1209 STEP: Creating a pod to test consume secrets Jun 3 00:29:18.403: INFO: Waiting up to 5m0s for pod "pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d" in namespace "secrets-6452" to be "Succeeded or Failed" Jun 3 00:29:18.409: INFO: Pod "pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056306ms Jun 3 00:29:20.414: INFO: Pod "pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010688317s Jun 3 00:29:22.419: INFO: Pod "pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015268192s STEP: Saw pod success Jun 3 00:29:22.419: INFO: Pod "pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d" satisfied condition "Succeeded or Failed" Jun 3 00:29:22.422: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d container secret-volume-test: STEP: delete the pod Jun 3 00:29:22.490: INFO: Waiting for pod pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d to disappear Jun 3 00:29:22.499: INFO: Pod pod-secrets-3fe46d92-4311-462a-a1b0-e88d8788ab7d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:29:22.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6452" for this suite. 
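[Note] The secret-volume test above follows a create-secret, mount-it, cat-the-file pattern. A minimal client-go sketch of the same shape; the secret name, key, pod name, image, and "default" namespace are all assumptions, not what the framework generates.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default" // illustrative namespace

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "volume-demo-secret"}, // hypothetical name
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: sec.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}

As in the log, verification amounts to waiting for the pod to succeed and checking that the container's log contains the secret value.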
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":3203,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:29:22.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-568 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jun 3 00:29:22.586: INFO: Found 0 stateful pods, waiting for 3 Jun 3 00:29:33.000: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:29:33.000: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:29:33.000: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 3 00:29:42.602: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:29:42.603: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:29:42.603: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 3 00:29:42.661: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 3 00:29:52.776: INFO: Updating stateful set ss2 Jun 3 00:29:52.806: INFO: Waiting for Pod statefulset-568/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jun 3 00:30:03.316: INFO: Found 2 stateful pods, waiting for 3 Jun 3 00:30:13.339: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:30:13.340: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 00:30:13.340: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 3 00:30:13.363: INFO: Updating stateful set ss2 Jun 3 00:30:13.390: INFO: Waiting for Pod statefulset-568/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 3 00:30:23.397: INFO: Waiting for Pod statefulset-568/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 3 00:30:33.419: INFO: Updating stateful set ss2 Jun 3 00:30:33.464: INFO: 
Waiting for StatefulSet statefulset-568/ss2 to complete update Jun 3 00:30:33.464: INFO: Waiting for Pod statefulset-568/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 3 00:30:43.472: INFO: Deleting all statefulset in ns statefulset-568 Jun 3 00:30:43.475: INFO: Scaling statefulset ss2 to 0 Jun 3 00:31:03.496: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 00:31:03.500: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:03.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-568" for this suite. • [SLOW TEST:101.041 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":191,"skipped":3203,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:03.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-9990e45e-d7be-418b-850f-76e7145b2d9c [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6216" for this suite. 
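[Note] The ConfigMap test just above is a negative test: apiserver validation must reject a ConfigMap whose data map contains an empty key. A minimal sketch of reproducing that rejection; the ConfigMap name and "default" namespace are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "empty-key-demo"}, // hypothetical name
		Data:       map[string]string{"": "value"},            // "" is not a valid data key
	}
	_, err = cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	if err == nil {
		fmt.Println("unexpected: apiserver accepted an empty data key")
		return
	}
	// Expected outcome: a validation error from the apiserver, which is
	// exactly what the conformance test asserts.
	fmt.Println("create rejected as expected:", err)
}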
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":192,"skipped":3215,"failed":0} SS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:03.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:03.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3401" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":193,"skipped":3217,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:03.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:20.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2650" for this suite. • [SLOW TEST:17.218 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":194,"skipped":3223,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:20.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:21.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-770" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":195,"skipped":3234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:21.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:31:21.710: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 3 00:31:23.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 00:31:26.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741081, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:31:29.010: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:31:29.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5117-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:30.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6617" for this suite. STEP: Destroying namespace "webhook-6617-markers" for this suite. 
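The "Registering the mutating webhook ... via the AdmissionRegistration API" step above amounts to creating a MutatingWebhookConfiguration that routes admission requests for the custom resource to the in-cluster e2e-test-webhook Service. A hedged sketch of such an object in Go — the handler path, namespace, and the group/plural split (modeled on the logged CRD name e2e-test-webhook-5117-crds.webhook.example.com) are assumptions, and the suite would set CABundle to the server cert it generated in BeforeEach:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutate" // hypothetical handler path
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "example-crd-mutator"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "crd-mutator.webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-6617", // placeholder; the suite uses its generated namespace
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: nil, // the suite injects the cert it generated during setup
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1", "v2"},
					Resources:   []string{"e2e-test-webhook-5117-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	if _, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The storage-version dance in the STEP lines (create a custom resource while v1 is the storage version, flip the CRD to v2, patch again) exists to confirm the webhook keeps mutating the resource after the CRD's storage version changes.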
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.307 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":196,"skipped":3260,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:30.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:31:31.404: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 3 00:31:33.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741091, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741091, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741091, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741091, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:31:36.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 3 00:31:40.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-6907 to-be-attached-pod -i -c=container1' Jun 3 
00:31:40.626: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:40.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6907" for this suite. STEP: Destroying namespace "webhook-6907-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.373 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":197,"skipped":3272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:40.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod Jun 3 00:31:40.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2664' Jun 3 00:31:43.243: INFO: stderr: "" Jun 3 00:31:43.243: INFO: stdout: "pod/pause created\n" Jun 3 00:31:43.243: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 3 00:31:43.243: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2664" to be "running and ready" Jun 3 00:31:43.280: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 37.006874ms Jun 3 00:31:45.284: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041358138s Jun 3 00:31:47.288: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.045056302s Jun 3 00:31:47.288: INFO: Pod "pause" satisfied condition "running and ready" Jun 3 00:31:47.288: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Jun 3 00:31:47.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2664' Jun 3 00:31:47.403: INFO: stderr: "" Jun 3 00:31:47.403: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 3 00:31:47.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2664' Jun 3 00:31:47.493: INFO: stderr: "" Jun 3 00:31:47.493: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 3 00:31:47.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2664' Jun 3 00:31:47.587: INFO: stderr: "" Jun 3 00:31:47.588: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 3 00:31:47.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2664' Jun 3 00:31:47.693: INFO: stderr: "" Jun 3 00:31:47.693: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources Jun 3 00:31:47.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2664' Jun 3 00:31:47.879: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:31:47.879: INFO: stdout: "pod \"pause\" force deleted\n" Jun 3 00:31:47.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2664' Jun 3 00:31:47.985: INFO: stderr: "No resources found in kubectl-2664 namespace.\n" Jun 3 00:31:47.985: INFO: stdout: "" Jun 3 00:31:47.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2664 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 00:31:48.090: INFO: stderr: "" Jun 3 00:31:48.090: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:48.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2664" for this suite. 
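The 'kubectl label' calls above are sugar over a patch against the pod's metadata; the trailing-dash form ('testing-label-') removes a key by sending it with a null value in a strategic-merge patch. An equivalent client-go sketch — pod name and namespace are copied from the log, everything else is illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("kubectl-2664")

	// Add the label (equivalent to: kubectl label pods pause testing-label=testing-label-value).
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Remove it again (equivalent to: kubectl label pods pause testing-label-).
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, remove, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}

The test's verification step ('kubectl get pod pause -L testing-label') simply renders the label as an extra output column, which is why the second listing shows an empty TESTING-LABEL field once the key is gone.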
• [SLOW TEST:7.333 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":198,"skipped":3315,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:48.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 3 00:31:52.440: INFO: &Pod{ObjectMeta:{send-events-eed0ea33-1aaa-4ce8-b0b3-0ca7abba1a61 events-843 /api/v1/namespaces/events-843/pods/send-events-eed0ea33-1aaa-4ce8-b0b3-0ca7abba1a61 2724a239-8d1d-4202-854f-22699887a67f 9812229 0 2020-06-03 00:31:48 +0000 UTC map[name:foo time:414980235] map[] [] [] [{e2e.test Update v1 2020-06-03 00:31:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-03 00:31:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.203\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-52jvc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-52jvc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-52jvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:31:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:31:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:31:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:31:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.203,StartTime:2020-06-03 00:31:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 00:31:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://8bbd95fa4914381fedc3c8540f94d9d4c1da07d7d134c5872fb58bd095db920d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.203,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 3 00:31:54.446: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 3 00:31:56.450: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:31:56.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-843" for this suite. • [SLOW TEST:8.391 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":199,"skipped":3321,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:31:56.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7097 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7097 STEP: creating replication controller externalsvc in namespace services-7097 I0603 00:31:56.744190 7 
runners.go:190] Created replication controller with name: externalsvc, namespace: services-7097, replica count: 2 I0603 00:31:59.794667 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:32:02.794933 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 3 00:32:02.861: INFO: Creating new exec pod Jun 3 00:32:06.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7097 execpodm45qx -- /bin/sh -x -c nslookup clusterip-service' Jun 3 00:32:07.265: INFO: stderr: "I0603 00:32:07.024139 2473 log.go:172] (0xc000416a50) (0xc000254dc0) Create stream\nI0603 00:32:07.024199 2473 log.go:172] (0xc000416a50) (0xc000254dc0) Stream added, broadcasting: 1\nI0603 00:32:07.026808 2473 log.go:172] (0xc000416a50) Reply frame received for 1\nI0603 00:32:07.026850 2473 log.go:172] (0xc000416a50) (0xc00001c500) Create stream\nI0603 00:32:07.026861 2473 log.go:172] (0xc000416a50) (0xc00001c500) Stream added, broadcasting: 3\nI0603 00:32:07.027835 2473 log.go:172] (0xc000416a50) Reply frame received for 3\nI0603 00:32:07.027876 2473 log.go:172] (0xc000416a50) (0xc00052c140) Create stream\nI0603 00:32:07.027898 2473 log.go:172] (0xc000416a50) (0xc00052c140) Stream added, broadcasting: 5\nI0603 00:32:07.028804 2473 log.go:172] (0xc000416a50) Reply frame received for 5\nI0603 00:32:07.135136 2473 log.go:172] (0xc000416a50) Data frame received for 5\nI0603 00:32:07.135170 2473 log.go:172] (0xc00052c140) (5) Data frame handling\nI0603 00:32:07.135196 2473 log.go:172] (0xc00052c140) (5) Data frame sent\n+ nslookup clusterip-service\nI0603 00:32:07.256964 2473 log.go:172] (0xc000416a50) Data frame received for 3\nI0603 00:32:07.256987 2473 log.go:172] (0xc00001c500) (3) Data frame handling\nI0603 00:32:07.257000 2473 log.go:172] (0xc00001c500) (3) Data frame sent\nI0603 00:32:07.258608 2473 log.go:172] (0xc000416a50) Data frame received for 3\nI0603 00:32:07.258627 2473 log.go:172] (0xc00001c500) (3) Data frame handling\nI0603 00:32:07.258639 2473 log.go:172] (0xc00001c500) (3) Data frame sent\nI0603 00:32:07.259342 2473 log.go:172] (0xc000416a50) Data frame received for 5\nI0603 00:32:07.259389 2473 log.go:172] (0xc00052c140) (5) Data frame handling\nI0603 00:32:07.259424 2473 log.go:172] (0xc000416a50) Data frame received for 3\nI0603 00:32:07.259446 2473 log.go:172] (0xc00001c500) (3) Data frame handling\nI0603 00:32:07.261031 2473 log.go:172] (0xc000416a50) Data frame received for 1\nI0603 00:32:07.261060 2473 log.go:172] (0xc000254dc0) (1) Data frame handling\nI0603 00:32:07.261083 2473 log.go:172] (0xc000254dc0) (1) Data frame sent\nI0603 00:32:07.261099 2473 log.go:172] (0xc000416a50) (0xc000254dc0) Stream removed, broadcasting: 1\nI0603 00:32:07.261306 2473 log.go:172] (0xc000416a50) Go away received\nI0603 00:32:07.261518 2473 log.go:172] (0xc000416a50) (0xc000254dc0) Stream removed, broadcasting: 1\nI0603 00:32:07.261532 2473 log.go:172] (0xc000416a50) (0xc00001c500) Stream removed, broadcasting: 3\nI0603 00:32:07.261539 2473 log.go:172] (0xc000416a50) (0xc00052c140) Stream removed, broadcasting: 5\n" Jun 3 00:32:07.266: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7097.svc.cluster.local\tcanonical name = 
externalsvc.services-7097.svc.cluster.local.\nName:\texternalsvc.services-7097.svc.cluster.local\nAddress: 10.107.112.212\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7097, will wait for the garbage collector to delete the pods Jun 3 00:32:07.332: INFO: Deleting ReplicationController externalsvc took: 7.868426ms Jun 3 00:32:07.632: INFO: Terminating ReplicationController externalsvc pods took: 300.23268ms Jun 3 00:32:15.350: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:32:15.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7097" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:18.949 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":200,"skipped":3322,"failed":0} S ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:32:15.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4140 STEP: creating service affinity-clusterip-transition in namespace services-4140 STEP: creating replication controller affinity-clusterip-transition in namespace services-4140 I0603 00:32:15.580282 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4140, replica count: 3 I0603 00:32:18.630705 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:32:21.631047 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:32:24.631348 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:32:24.637: INFO: Creating new exec pod Jun 3 00:32:29.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4140 
execpod-affinityvjj77 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jun 3 00:32:29.921: INFO: stderr: "I0603 00:32:29.797828 2494 log.go:172] (0xc0000e8370) (0xc00014f7c0) Create stream\nI0603 00:32:29.797910 2494 log.go:172] (0xc0000e8370) (0xc00014f7c0) Stream added, broadcasting: 1\nI0603 00:32:29.799796 2494 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0603 00:32:29.799847 2494 log.go:172] (0xc0000e8370) (0xc0005543c0) Create stream\nI0603 00:32:29.799866 2494 log.go:172] (0xc0000e8370) (0xc0005543c0) Stream added, broadcasting: 3\nI0603 00:32:29.800796 2494 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0603 00:32:29.800883 2494 log.go:172] (0xc0000e8370) (0xc000450320) Create stream\nI0603 00:32:29.800904 2494 log.go:172] (0xc0000e8370) (0xc000450320) Stream added, broadcasting: 5\nI0603 00:32:29.802106 2494 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0603 00:32:29.885739 2494 log.go:172] (0xc0000e8370) Data frame received for 5\nI0603 00:32:29.885781 2494 log.go:172] (0xc000450320) (5) Data frame handling\nI0603 00:32:29.885820 2494 log.go:172] (0xc000450320) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0603 00:32:29.912625 2494 log.go:172] (0xc0000e8370) Data frame received for 5\nI0603 00:32:29.912731 2494 log.go:172] (0xc000450320) (5) Data frame handling\nI0603 00:32:29.912777 2494 log.go:172] (0xc000450320) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0603 00:32:29.913102 2494 log.go:172] (0xc0000e8370) Data frame received for 5\nI0603 00:32:29.913378 2494 log.go:172] (0xc000450320) (5) Data frame handling\nI0603 00:32:29.913419 2494 log.go:172] (0xc0000e8370) Data frame received for 3\nI0603 00:32:29.913442 2494 log.go:172] (0xc0005543c0) (3) Data frame handling\nI0603 00:32:29.914856 2494 log.go:172] (0xc0000e8370) Data frame received for 1\nI0603 00:32:29.914875 2494 log.go:172] (0xc00014f7c0) (1) Data frame handling\nI0603 00:32:29.914886 2494 log.go:172] (0xc00014f7c0) (1) Data frame sent\nI0603 00:32:29.914897 2494 log.go:172] (0xc0000e8370) (0xc00014f7c0) Stream removed, broadcasting: 1\nI0603 00:32:29.914952 2494 log.go:172] (0xc0000e8370) Go away received\nI0603 00:32:29.915213 2494 log.go:172] (0xc0000e8370) (0xc00014f7c0) Stream removed, broadcasting: 1\nI0603 00:32:29.915229 2494 log.go:172] (0xc0000e8370) (0xc0005543c0) Stream removed, broadcasting: 3\nI0603 00:32:29.915238 2494 log.go:172] (0xc0000e8370) (0xc000450320) Stream removed, broadcasting: 5\n" Jun 3 00:32:29.921: INFO: stdout: "" Jun 3 00:32:29.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4140 execpod-affinityvjj77 -- /bin/sh -x -c nc -zv -t -w 2 10.110.24.151 80' Jun 3 00:32:30.136: INFO: stderr: "I0603 00:32:30.059112 2515 log.go:172] (0xc0009a53f0) (0xc00080fea0) Create stream\nI0603 00:32:30.059174 2515 log.go:172] (0xc0009a53f0) (0xc00080fea0) Stream added, broadcasting: 1\nI0603 00:32:30.062366 2515 log.go:172] (0xc0009a53f0) Reply frame received for 1\nI0603 00:32:30.062421 2515 log.go:172] (0xc0009a53f0) (0xc0006d6f00) Create stream\nI0603 00:32:30.062440 2515 log.go:172] (0xc0009a53f0) (0xc0006d6f00) Stream added, broadcasting: 3\nI0603 00:32:30.063339 2515 log.go:172] (0xc0009a53f0) Reply frame received for 3\nI0603 00:32:30.063406 2515 log.go:172] (0xc0009a53f0) (0xc000856e60) Create stream\nI0603 00:32:30.063424 2515 log.go:172] (0xc0009a53f0) (0xc000856e60) Stream added, 
broadcasting: 5\nI0603 00:32:30.064126 2515 log.go:172] (0xc0009a53f0) Reply frame received for 5\nI0603 00:32:30.128614 2515 log.go:172] (0xc0009a53f0) Data frame received for 3\nI0603 00:32:30.128660 2515 log.go:172] (0xc0006d6f00) (3) Data frame handling\nI0603 00:32:30.128685 2515 log.go:172] (0xc0009a53f0) Data frame received for 5\nI0603 00:32:30.128695 2515 log.go:172] (0xc000856e60) (5) Data frame handling\nI0603 00:32:30.128711 2515 log.go:172] (0xc000856e60) (5) Data frame sent\nI0603 00:32:30.128721 2515 log.go:172] (0xc0009a53f0) Data frame received for 5\nI0603 00:32:30.128729 2515 log.go:172] (0xc000856e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.24.151 80\nConnection to 10.110.24.151 80 port [tcp/http] succeeded!\nI0603 00:32:30.130120 2515 log.go:172] (0xc0009a53f0) Data frame received for 1\nI0603 00:32:30.130144 2515 log.go:172] (0xc00080fea0) (1) Data frame handling\nI0603 00:32:30.130155 2515 log.go:172] (0xc00080fea0) (1) Data frame sent\nI0603 00:32:30.130165 2515 log.go:172] (0xc0009a53f0) (0xc00080fea0) Stream removed, broadcasting: 1\nI0603 00:32:30.130451 2515 log.go:172] (0xc0009a53f0) Go away received\nI0603 00:32:30.130509 2515 log.go:172] (0xc0009a53f0) (0xc00080fea0) Stream removed, broadcasting: 1\nI0603 00:32:30.130538 2515 log.go:172] (0xc0009a53f0) (0xc0006d6f00) Stream removed, broadcasting: 3\nI0603 00:32:30.130567 2515 log.go:172] (0xc0009a53f0) (0xc000856e60) Stream removed, broadcasting: 5\n" Jun 3 00:32:30.137: INFO: stdout: "" Jun 3 00:32:30.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4140 execpod-affinityvjj77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.24.151:80/ ; done' Jun 3 00:32:30.485: INFO: stderr: "I0603 00:32:30.298277 2535 log.go:172] (0xc000bd8dc0) (0xc000bbe5a0) Create stream\nI0603 00:32:30.298333 2535 log.go:172] (0xc000bd8dc0) (0xc000bbe5a0) Stream added, broadcasting: 1\nI0603 00:32:30.301849 2535 log.go:172] (0xc000bd8dc0) Reply frame received for 1\nI0603 00:32:30.301882 2535 log.go:172] (0xc000bd8dc0) (0xc000254dc0) Create stream\nI0603 00:32:30.301891 2535 log.go:172] (0xc000bd8dc0) (0xc000254dc0) Stream added, broadcasting: 3\nI0603 00:32:30.302704 2535 log.go:172] (0xc000bd8dc0) Reply frame received for 3\nI0603 00:32:30.302763 2535 log.go:172] (0xc000bd8dc0) (0xc00015b0e0) Create stream\nI0603 00:32:30.302783 2535 log.go:172] (0xc000bd8dc0) (0xc00015b0e0) Stream added, broadcasting: 5\nI0603 00:32:30.303647 2535 log.go:172] (0xc000bd8dc0) Reply frame received for 5\nI0603 00:32:30.363413 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.363449 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.363465 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.363495 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.363547 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.363569 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.379386 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.379417 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.379435 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.379871 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.379914 2535 log.go:172] (0xc000bd8dc0) Data 
frame received for 5\nI0603 00:32:30.379954 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.379975 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.380013 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.380048 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.387662 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.387689 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.387718 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.388633 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.388668 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.388681 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.388698 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.388709 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.388720 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.393623 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.393645 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.393662 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.394105 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.394117 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.394126 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.394234 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.394255 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.394271 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.397638 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.397653 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.397663 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.398252 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.398277 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.398308 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.398331 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.398344 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.398357 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.402235 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.402258 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.402274 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.402713 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.402732 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.402742 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.402752 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.402760 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.402788 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.406826 2535 log.go:172] (0xc000bd8dc0) Data frame 
received for 3\nI0603 00:32:30.406844 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.406852 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.407245 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.407272 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.407293 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.407389 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.407413 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.407459 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.420730 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.420767 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.420821 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.421772 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.421789 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.421799 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.421813 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.421845 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.421869 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.427112 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.427143 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.427167 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.427678 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.427717 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.427745 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.427758 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.427768 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.427785 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.427874 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.427898 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.427917 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.433075 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.433098 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.433342 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.433741 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.433765 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.433785 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.433803 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.433811 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.433834 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.433993 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.434010 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.434023 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.438257 2535 log.go:172] (0xc000bd8dc0) Data 
frame received for 3\nI0603 00:32:30.438274 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.438287 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.438673 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.438691 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.438702 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.438716 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.438734 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.438755 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.444486 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.444503 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.444524 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.445322 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.445351 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.445362 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.445391 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.445422 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.445444 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.449948 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.449969 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.449997 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.450630 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.450656 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.450672 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.450682 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.450703 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.450712 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.455817 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.455839 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.455861 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.456368 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.456393 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.456409 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.456430 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.456441 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.456452 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.456471 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.456481 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.456505 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\nI0603 00:32:30.460409 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.460430 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.460448 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.460796 2535 log.go:172] (0xc000bd8dc0) 
Data frame received for 5\nI0603 00:32:30.460809 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.460817 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.460864 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.460878 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.460890 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.466510 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.466523 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.466530 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.467105 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.467124 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.467138 2535 log.go:172] (0xc00015b0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.467232 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.467248 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.467261 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.471919 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.471946 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.471965 2535 log.go:172] (0xc000254dc0) (3) Data frame sent\nI0603 00:32:30.474942 2535 log.go:172] (0xc000bd8dc0) Data frame received for 3\nI0603 00:32:30.474973 2535 log.go:172] (0xc000254dc0) (3) Data frame handling\nI0603 00:32:30.475178 2535 log.go:172] (0xc000bd8dc0) Data frame received for 5\nI0603 00:32:30.475214 2535 log.go:172] (0xc00015b0e0) (5) Data frame handling\nI0603 00:32:30.477298 2535 log.go:172] (0xc000bd8dc0) Data frame received for 1\nI0603 00:32:30.477345 2535 log.go:172] (0xc000bbe5a0) (1) Data frame handling\nI0603 00:32:30.477366 2535 log.go:172] (0xc000bbe5a0) (1) Data frame sent\nI0603 00:32:30.477396 2535 log.go:172] (0xc000bd8dc0) (0xc000bbe5a0) Stream removed, broadcasting: 1\nI0603 00:32:30.477419 2535 log.go:172] (0xc000bd8dc0) Go away received\nI0603 00:32:30.477979 2535 log.go:172] (0xc000bd8dc0) (0xc000bbe5a0) Stream removed, broadcasting: 1\nI0603 00:32:30.478013 2535 log.go:172] (0xc000bd8dc0) (0xc000254dc0) Stream removed, broadcasting: 3\nI0603 00:32:30.478029 2535 log.go:172] (0xc000bd8dc0) (0xc00015b0e0) Stream removed, broadcasting: 5\n" Jun 3 00:32:30.485: INFO: stdout: "\naffinity-clusterip-transition-2dxvn\naffinity-clusterip-transition-2dxvn\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-2dxvn\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-c4sv8\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-2dxvn\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-c4sv8\naffinity-clusterip-transition-c4sv8\naffinity-clusterip-transition-2dxvn\naffinity-clusterip-transition-c4sv8\naffinity-clusterip-transition-2dxvn" Jun 3 00:32:30.485: INFO: Received response from host: Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-2dxvn Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-2dxvn Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.485: INFO: Received response from host: 
affinity-clusterip-transition-t4k4b Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-2dxvn Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-c4sv8 Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-2dxvn Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-c4sv8 Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-c4sv8 Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-2dxvn Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-c4sv8 Jun 3 00:32:30.485: INFO: Received response from host: affinity-clusterip-transition-2dxvn Jun 3 00:32:30.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4140 execpod-affinityvjj77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.24.151:80/ ; done' Jun 3 00:32:30.802: INFO: stderr: "I0603 00:32:30.624088 2555 log.go:172] (0xc000a80000) (0xc0004c9180) Create stream\nI0603 00:32:30.624139 2555 log.go:172] (0xc000a80000) (0xc0004c9180) Stream added, broadcasting: 1\nI0603 00:32:30.636564 2555 log.go:172] (0xc000a80000) Reply frame received for 1\nI0603 00:32:30.636611 2555 log.go:172] (0xc000a80000) (0xc000161ae0) Create stream\nI0603 00:32:30.636623 2555 log.go:172] (0xc000a80000) (0xc000161ae0) Stream added, broadcasting: 3\nI0603 00:32:30.637857 2555 log.go:172] (0xc000a80000) Reply frame received for 3\nI0603 00:32:30.637902 2555 log.go:172] (0xc000a80000) (0xc00063ed20) Create stream\nI0603 00:32:30.637916 2555 log.go:172] (0xc000a80000) (0xc00063ed20) Stream added, broadcasting: 5\nI0603 00:32:30.639484 2555 log.go:172] (0xc000a80000) Reply frame received for 5\nI0603 00:32:30.696479 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.696506 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.696535 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.696559 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.696586 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.696618 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.702999 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.703022 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.703043 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.703963 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.703985 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.703998 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.704081 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.704112 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.704138 2555 log.go:172] (0xc000161ae0) 
(3) Data frame sent\nI0603 00:32:30.710598 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.710640 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.710673 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.711792 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.711812 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.711832 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.711846 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.711857 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.712082 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.718486 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.718512 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.718546 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.719403 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.719439 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.719453 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.719473 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.719483 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.719518 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\nI0603 00:32:30.719543 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.719553 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.719595 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\nI0603 00:32:30.724851 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.724875 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.724896 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.725776 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.725789 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.725795 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.725821 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.725850 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.725869 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.731248 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.731267 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.731281 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.732122 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.732138 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.732148 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.732167 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.732180 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.732191 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.740221 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.740247 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.740267 2555 log.go:172] 
(0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.740875 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.740911 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.740921 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.740949 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.740985 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.741019 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.746101 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.746130 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.746157 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.746798 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.746822 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.746837 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.746856 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.746884 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.746906 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.750376 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.750409 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.750447 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.750839 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.750868 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.750924 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.750941 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.750970 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.750991 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.755700 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.755736 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.755766 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.756189 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.756213 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.756226 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.756246 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.756263 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.756276 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.760141 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.760176 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.760204 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.760507 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.760527 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.760547 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.760576 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.760600 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.760621 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.764840 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.764857 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.764864 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.766021 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.766058 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.766079 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.766106 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.766128 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.766149 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.770289 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.770315 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.770344 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.770784 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.770806 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.770838 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.770874 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.770893 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.770922 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.775307 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.775334 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.775355 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.776031 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.776050 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.776065 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\nI0603 00:32:30.776078 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.776089 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.776100 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.780849 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.780881 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.780915 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.781719 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.781750 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.781785 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.781798 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\nI0603 00:32:30.781810 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.781821 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.781843 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\nI0603 00:32:30.782026 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.782049 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.786145 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.786169 2555 
log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.786193 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.787099 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.787126 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.787152 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.787172 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.787188 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.787210 2555 log.go:172] (0xc00063ed20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.24.151:80/\nI0603 00:32:30.792321 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.792344 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.792363 2555 log.go:172] (0xc000161ae0) (3) Data frame sent\nI0603 00:32:30.793018 2555 log.go:172] (0xc000a80000) Data frame received for 5\nI0603 00:32:30.793063 2555 log.go:172] (0xc00063ed20) (5) Data frame handling\nI0603 00:32:30.793315 2555 log.go:172] (0xc000a80000) Data frame received for 3\nI0603 00:32:30.793353 2555 log.go:172] (0xc000161ae0) (3) Data frame handling\nI0603 00:32:30.795293 2555 log.go:172] (0xc000a80000) Data frame received for 1\nI0603 00:32:30.795319 2555 log.go:172] (0xc0004c9180) (1) Data frame handling\nI0603 00:32:30.795336 2555 log.go:172] (0xc0004c9180) (1) Data frame sent\nI0603 00:32:30.795358 2555 log.go:172] (0xc000a80000) (0xc0004c9180) Stream removed, broadcasting: 1\nI0603 00:32:30.795380 2555 log.go:172] (0xc000a80000) Go away received\nI0603 00:32:30.795763 2555 log.go:172] (0xc000a80000) (0xc0004c9180) Stream removed, broadcasting: 1\nI0603 00:32:30.795792 2555 log.go:172] (0xc000a80000) (0xc000161ae0) Stream removed, broadcasting: 3\nI0603 00:32:30.795803 2555 log.go:172] (0xc000a80000) (0xc00063ed20) Stream removed, broadcasting: 5\n" Jun 3 00:32:30.803: INFO: stdout: "\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b\naffinity-clusterip-transition-t4k4b" Jun 3 00:32:30.803: INFO: Received response from host: Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: 
INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Received response from host: affinity-clusterip-transition-t4k4b Jun 3 00:32:30.803: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4140, will wait for the garbage collector to delete the pods Jun 3 00:32:31.008: INFO: Deleting ReplicationController affinity-clusterip-transition took: 95.856414ms Jun 3 00:32:31.308: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 300.211404ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:32:45.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4140" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.924 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":201,"skipped":3323,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:32:45.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 3 00:32:45.415: INFO: Waiting up to 5m0s for pod "pod-248848cb-141d-4581-bbdd-683d50473fd3" in namespace "emptydir-5977" to be "Succeeded or Failed" Jun 3 00:32:45.427: INFO: Pod "pod-248848cb-141d-4581-bbdd-683d50473fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.7883ms Jun 3 00:32:47.431: INFO: Pod "pod-248848cb-141d-4581-bbdd-683d50473fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015475919s Jun 3 00:32:49.435: INFO: Pod "pod-248848cb-141d-4581-bbdd-683d50473fd3": Phase="Running", Reason="", readiness=true. Elapsed: 4.020067213s Jun 3 00:32:51.440: INFO: Pod "pod-248848cb-141d-4581-bbdd-683d50473fd3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024506098s STEP: Saw pod success Jun 3 00:32:51.440: INFO: Pod "pod-248848cb-141d-4581-bbdd-683d50473fd3" satisfied condition "Succeeded or Failed" Jun 3 00:32:51.449: INFO: Trying to get logs from node latest-worker pod pod-248848cb-141d-4581-bbdd-683d50473fd3 container test-container: STEP: delete the pod Jun 3 00:32:51.498: INFO: Waiting for pod pod-248848cb-141d-4581-bbdd-683d50473fd3 to disappear Jun 3 00:32:51.510: INFO: Pod pod-248848cb-141d-4581-bbdd-683d50473fd3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:32:51.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5977" for this suite. • [SLOW TEST:6.157 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3330,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:32:51.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 3 00:32:51.607: INFO: Waiting up to 5m0s for pod "pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454" in namespace "emptydir-8997" to be "Succeeded or Failed" Jun 3 00:32:51.619: INFO: Pod "pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454": Phase="Pending", Reason="", readiness=false. Elapsed: 11.998144ms Jun 3 00:32:53.623: INFO: Pod "pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016256143s Jun 3 00:32:55.628: INFO: Pod "pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020822628s STEP: Saw pod success Jun 3 00:32:55.628: INFO: Pod "pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454" satisfied condition "Succeeded or Failed" Jun 3 00:32:55.631: INFO: Trying to get logs from node latest-worker2 pod pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454 container test-container: STEP: delete the pod Jun 3 00:32:55.669: INFO: Waiting for pod pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454 to disappear Jun 3 00:32:55.672: INFO: Pod pod-b0ecf7c4-1530-4be7-9b37-f29d26ff2454 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:32:55.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8997" for this suite. 
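The two default-medium emptyDir cases above come down to a pod that mounts an emptyDir volume and checks the permission bits applied to it. A minimal sketch of that shape, assuming a reachable cluster; the busybox image and the namespace emptydir-demo are hypothetical choices for illustration, and the conformance test itself uses the agnhost mounttest image rather than a stat one-liner:

kubectl create namespace emptydir-demo
kubectl apply -n emptydir-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the octal mode of the mount point, analogous to the mode checks above
    command: ["sh", "-c", "stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium: backed by node storage
EOF
kubectl logs -n emptydir-demo emptydir-mode-demo   # typically prints 777 for a default emptyDir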
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3341,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:32:55.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:33:00.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6248" for this suite. • [SLOW TEST:5.121 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":204,"skipped":3343,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:33:00.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 00:33:05.029: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:33:05.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "container-runtime-3218" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3346,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:33:05.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 3 00:33:05.223: INFO: Waiting up to 5m0s for pod "pod-48ab08cb-39b3-4172-ae70-b4d613601243" in namespace "emptydir-8285" to be "Succeeded or Failed" Jun 3 00:33:05.247: INFO: Pod "pod-48ab08cb-39b3-4172-ae70-b4d613601243": Phase="Pending", Reason="", readiness=false. Elapsed: 23.654636ms Jun 3 00:33:07.383: INFO: Pod "pod-48ab08cb-39b3-4172-ae70-b4d613601243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159216014s Jun 3 00:33:09.388: INFO: Pod "pod-48ab08cb-39b3-4172-ae70-b4d613601243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164579177s STEP: Saw pod success Jun 3 00:33:09.388: INFO: Pod "pod-48ab08cb-39b3-4172-ae70-b4d613601243" satisfied condition "Succeeded or Failed" Jun 3 00:33:09.390: INFO: Trying to get logs from node latest-worker2 pod pod-48ab08cb-39b3-4172-ae70-b4d613601243 container test-container: STEP: delete the pod Jun 3 00:33:09.427: INFO: Waiting for pod pod-48ab08cb-39b3-4172-ae70-b4d613601243 to disappear Jun 3 00:33:09.433: INFO: Pod pod-48ab08cb-39b3-4172-ae70-b4d613601243 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:33:09.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8285" for this suite. 
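The tmpfs variant just exercised differs from the default-medium sketch above only in the volume's medium; with the same hypothetical names, the volume stanza becomes:

  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # back the volume with tmpfs instead of node disk

The mount and the mode check are unchanged; on Linux the volume then appears as a tmpfs mount inside the container.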
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":206,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:33:09.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode Jun 3 00:33:09.531: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9414" to be "Succeeded or Failed" Jun 3 00:33:09.535: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141713ms Jun 3 00:33:11.539: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007469823s Jun 3 00:33:13.610: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079322374s Jun 3 00:33:15.615: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083595576s STEP: Saw pod success Jun 3 00:33:15.615: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 3 00:33:15.618: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 3 00:33:15.690: INFO: Waiting for pod pod-host-path-test to disappear Jun 3 00:33:15.702: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:33:15.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9414" for this suite. 
• [SLOW TEST:6.271 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3374,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:33:15.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:33:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1374" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3379,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:33:19.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1923 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1923;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1923 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1923;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1923.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1923.svc;check="$$(dig 
+tcp +noall +answer +search dns-test-service.dns-1923.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1923.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1923.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1923.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1923.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1923.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1923.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.71.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.71.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.71.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.71.246_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1923 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1923;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1923 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1923;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1923.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1923.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1923.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1923.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1923.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1923.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1923.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1923.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1923.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1923.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.71.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.71.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.71.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.71.246_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 00:33:26.066: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.070: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.079: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.082: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.084: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.129: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.133: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.135: INFO: Unable to read jessie_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.138: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.140: INFO: Unable to read 
jessie_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.143: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.147: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:26.167: INFO: Lookups using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1923 wheezy_tcp@dns-test-service.dns-1923 wheezy_udp@dns-test-service.dns-1923.svc wheezy_tcp@dns-test-service.dns-1923.svc wheezy_udp@_http._tcp.dns-test-service.dns-1923.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1923 jessie_tcp@dns-test-service.dns-1923 jessie_udp@dns-test-service.dns-1923.svc jessie_tcp@dns-test-service.dns-1923.svc jessie_udp@_http._tcp.dns-test-service.dns-1923.svc] Jun 3 00:33:31.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.178: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.186: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.189: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.192: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.220: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.222: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.225: INFO: Unable to read jessie_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods 
dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.231: INFO: Unable to read jessie_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.234: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:31.260: INFO: Lookups using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1923 wheezy_tcp@dns-test-service.dns-1923 wheezy_udp@dns-test-service.dns-1923.svc wheezy_tcp@dns-test-service.dns-1923.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1923 jessie_tcp@dns-test-service.dns-1923 jessie_udp@dns-test-service.dns-1923.svc jessie_tcp@dns-test-service.dns-1923.svc] Jun 3 00:33:36.172: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.209: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.213: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.216: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.220: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.223: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.254: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.257: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.261: INFO: Unable to read jessie_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 
00:33:36.264: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.267: INFO: Unable to read jessie_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.271: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:36.295: INFO: Lookups using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1923 wheezy_tcp@dns-test-service.dns-1923 wheezy_udp@dns-test-service.dns-1923.svc wheezy_tcp@dns-test-service.dns-1923.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1923 jessie_tcp@dns-test-service.dns-1923 jessie_udp@dns-test-service.dns-1923.svc jessie_tcp@dns-test-service.dns-1923.svc] Jun 3 00:33:41.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.176: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.211: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.214: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.216: INFO: Unable to read jessie_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.218: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.220: INFO: Unable to read jessie_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:41.259: INFO: Lookups using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1923 wheezy_tcp@dns-test-service.dns-1923 wheezy_udp@dns-test-service.dns-1923.svc wheezy_tcp@dns-test-service.dns-1923.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1923 jessie_tcp@dns-test-service.dns-1923 jessie_udp@dns-test-service.dns-1923.svc jessie_tcp@dns-test-service.dns-1923.svc] Jun 3 00:33:46.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.177: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.187: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.190: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.217: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.220: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.223: INFO: Unable to read jessie_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.226: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923 from pod 
dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.230: INFO: Unable to read jessie_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.232: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:46.256: INFO: Lookups using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1923 wheezy_tcp@dns-test-service.dns-1923 wheezy_udp@dns-test-service.dns-1923.svc wheezy_tcp@dns-test-service.dns-1923.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1923 jessie_tcp@dns-test-service.dns-1923 jessie_udp@dns-test-service.dns-1923.svc jessie_tcp@dns-test-service.dns-1923.svc] Jun 3 00:33:51.172: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.175: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.177: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.208: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.210: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.213: INFO: Unable to read jessie_udp@dns-test-service.dns-1923 from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923 from pod 
dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.217: INFO: Unable to read jessie_udp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.220: INFO: Unable to read jessie_tcp@dns-test-service.dns-1923.svc from pod dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993: the server could not find the requested resource (get pods dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993) Jun 3 00:33:51.242: INFO: Lookups using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1923 wheezy_tcp@dns-test-service.dns-1923 wheezy_udp@dns-test-service.dns-1923.svc wheezy_tcp@dns-test-service.dns-1923.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1923 jessie_tcp@dns-test-service.dns-1923 jessie_udp@dns-test-service.dns-1923.svc jessie_tcp@dns-test-service.dns-1923.svc] Jun 3 00:33:56.255: INFO: DNS probes using dns-1923/dns-test-9f62a2ab-24ee-41dd-917a-7526ffe8c993 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:33:56.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1923" for this suite. • [SLOW TEST:36.936 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":209,"skipped":3400,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:33:56.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-9e166cbd-86fe-49bd-b924-61eef6f283ad STEP: Creating a pod to test consume configMaps Jun 3 00:33:56.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474" in namespace "configmap-342" to be "Succeeded or Failed" Jun 3 00:33:56.860: INFO: Pod "pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.924405ms Jun 3 00:33:58.934: INFO: Pod "pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079890806s Jun 3 00:34:00.939: INFO: Pod "pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084265885s STEP: Saw pod success Jun 3 00:34:00.939: INFO: Pod "pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474" satisfied condition "Succeeded or Failed" Jun 3 00:34:00.941: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474 container configmap-volume-test: STEP: delete the pod Jun 3 00:34:01.349: INFO: Waiting for pod pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474 to disappear Jun 3 00:34:01.416: INFO: Pod pod-configmaps-b2d31d25-7cf0-42d2-bcbe-b79a83db9474 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:34:01.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-342" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3419,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:34:01.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 3 00:34:02.330: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 3 00:34:04.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741242, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741242, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741242, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741242, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:34:07.369: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:34:07.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:34:08.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4416" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.385 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":211,"skipped":3426,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:34:08.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6908 Jun 3 00:34:12.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 3 00:34:13.190: INFO: stderr: "I0603 00:34:13.024310 2573 log.go:172] (0xc000a26790) (0xc000430e60) Create stream\nI0603 00:34:13.024383 2573 log.go:172] (0xc000a26790) (0xc000430e60) Stream added, broadcasting: 1\nI0603 00:34:13.028207 2573 log.go:172] (0xc000a26790) Reply frame received for 1\nI0603 00:34:13.028235 2573 log.go:172] (0xc000a26790) (0xc000139ea0) Create stream\nI0603 00:34:13.028248 2573 log.go:172] (0xc000a26790) (0xc000139ea0) Stream added, broadcasting: 3\nI0603 00:34:13.029581 2573 log.go:172] (0xc000a26790) Reply frame received for 3\nI0603 
00:34:13.029628 2573 log.go:172] (0xc000a26790) (0xc000300280) Create stream\nI0603 00:34:13.029639 2573 log.go:172] (0xc000a26790) (0xc000300280) Stream added, broadcasting: 5\nI0603 00:34:13.030763 2573 log.go:172] (0xc000a26790) Reply frame received for 5\nI0603 00:34:13.151345 2573 log.go:172] (0xc000a26790) Data frame received for 5\nI0603 00:34:13.151366 2573 log.go:172] (0xc000300280) (5) Data frame handling\nI0603 00:34:13.151381 2573 log.go:172] (0xc000300280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0603 00:34:13.181663 2573 log.go:172] (0xc000a26790) Data frame received for 3\nI0603 00:34:13.181693 2573 log.go:172] (0xc000139ea0) (3) Data frame handling\nI0603 00:34:13.181714 2573 log.go:172] (0xc000139ea0) (3) Data frame sent\nI0603 00:34:13.182673 2573 log.go:172] (0xc000a26790) Data frame received for 5\nI0603 00:34:13.182712 2573 log.go:172] (0xc000300280) (5) Data frame handling\nI0603 00:34:13.182736 2573 log.go:172] (0xc000a26790) Data frame received for 3\nI0603 00:34:13.182762 2573 log.go:172] (0xc000139ea0) (3) Data frame handling\nI0603 00:34:13.184533 2573 log.go:172] (0xc000a26790) Data frame received for 1\nI0603 00:34:13.184552 2573 log.go:172] (0xc000430e60) (1) Data frame handling\nI0603 00:34:13.184562 2573 log.go:172] (0xc000430e60) (1) Data frame sent\nI0603 00:34:13.184571 2573 log.go:172] (0xc000a26790) (0xc000430e60) Stream removed, broadcasting: 1\nI0603 00:34:13.184580 2573 log.go:172] (0xc000a26790) Go away received\nI0603 00:34:13.184888 2573 log.go:172] (0xc000a26790) (0xc000430e60) Stream removed, broadcasting: 1\nI0603 00:34:13.184904 2573 log.go:172] (0xc000a26790) (0xc000139ea0) Stream removed, broadcasting: 3\nI0603 00:34:13.184910 2573 log.go:172] (0xc000a26790) (0xc000300280) Stream removed, broadcasting: 5\n" Jun 3 00:34:13.190: INFO: stdout: "iptables" Jun 3 00:34:13.190: INFO: proxyMode: iptables Jun 3 00:34:13.195: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:13.218: INFO: Pod kube-proxy-mode-detector still exists Jun 3 00:34:15.218: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:15.223: INFO: Pod kube-proxy-mode-detector still exists Jun 3 00:34:17.218: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:17.223: INFO: Pod kube-proxy-mode-detector still exists Jun 3 00:34:19.218: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:19.222: INFO: Pod kube-proxy-mode-detector still exists Jun 3 00:34:21.218: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:21.222: INFO: Pod kube-proxy-mode-detector still exists Jun 3 00:34:23.220: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:23.223: INFO: Pod kube-proxy-mode-detector still exists Jun 3 00:34:25.218: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 00:34:25.222: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6908 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6908 I0603 00:34:25.268425 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6908, replica count: 3 I0603 00:34:28.318903 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:34:31.319108 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 
created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:34:31.327: INFO: Creating new exec pod Jun 3 00:34:36.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jun 3 00:34:36.679: INFO: stderr: "I0603 00:34:36.560905 2597 log.go:172] (0xc00097d3f0) (0xc00092c5a0) Create stream\nI0603 00:34:36.560970 2597 log.go:172] (0xc00097d3f0) (0xc00092c5a0) Stream added, broadcasting: 1\nI0603 00:34:36.568277 2597 log.go:172] (0xc00097d3f0) Reply frame received for 1\nI0603 00:34:36.568324 2597 log.go:172] (0xc00097d3f0) (0xc0008700a0) Create stream\nI0603 00:34:36.568335 2597 log.go:172] (0xc00097d3f0) (0xc0008700a0) Stream added, broadcasting: 3\nI0603 00:34:36.569103 2597 log.go:172] (0xc00097d3f0) Reply frame received for 3\nI0603 00:34:36.569270 2597 log.go:172] (0xc00097d3f0) (0xc0006783c0) Create stream\nI0603 00:34:36.569283 2597 log.go:172] (0xc00097d3f0) (0xc0006783c0) Stream added, broadcasting: 5\nI0603 00:34:36.569994 2597 log.go:172] (0xc00097d3f0) Reply frame received for 5\nI0603 00:34:36.670818 2597 log.go:172] (0xc00097d3f0) Data frame received for 5\nI0603 00:34:36.670852 2597 log.go:172] (0xc0006783c0) (5) Data frame handling\nI0603 00:34:36.670878 2597 log.go:172] (0xc0006783c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0603 00:34:36.671523 2597 log.go:172] (0xc00097d3f0) Data frame received for 5\nI0603 00:34:36.671536 2597 log.go:172] (0xc0006783c0) (5) Data frame handling\nI0603 00:34:36.671542 2597 log.go:172] (0xc0006783c0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0603 00:34:36.672095 2597 log.go:172] (0xc00097d3f0) Data frame received for 5\nI0603 00:34:36.672113 2597 log.go:172] (0xc0006783c0) (5) Data frame handling\nI0603 00:34:36.672292 2597 log.go:172] (0xc00097d3f0) Data frame received for 3\nI0603 00:34:36.672353 2597 log.go:172] (0xc0008700a0) (3) Data frame handling\nI0603 00:34:36.673950 2597 log.go:172] (0xc00097d3f0) Data frame received for 1\nI0603 00:34:36.673970 2597 log.go:172] (0xc00092c5a0) (1) Data frame handling\nI0603 00:34:36.673993 2597 log.go:172] (0xc00092c5a0) (1) Data frame sent\nI0603 00:34:36.674014 2597 log.go:172] (0xc00097d3f0) (0xc00092c5a0) Stream removed, broadcasting: 1\nI0603 00:34:36.674044 2597 log.go:172] (0xc00097d3f0) Go away received\nI0603 00:34:36.674376 2597 log.go:172] (0xc00097d3f0) (0xc00092c5a0) Stream removed, broadcasting: 1\nI0603 00:34:36.674393 2597 log.go:172] (0xc00097d3f0) (0xc0008700a0) Stream removed, broadcasting: 3\nI0603 00:34:36.674401 2597 log.go:172] (0xc00097d3f0) (0xc0006783c0) Stream removed, broadcasting: 5\n" Jun 3 00:34:36.679: INFO: stdout: "" Jun 3 00:34:36.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c nc -zv -t -w 2 10.98.40.139 80' Jun 3 00:34:36.886: INFO: stderr: "I0603 00:34:36.819125 2617 log.go:172] (0xc0004a40b0) (0xc000644500) Create stream\nI0603 00:34:36.819194 2617 log.go:172] (0xc0004a40b0) (0xc000644500) Stream added, broadcasting: 1\nI0603 00:34:36.821429 2617 log.go:172] (0xc0004a40b0) Reply frame received for 1\nI0603 00:34:36.821472 2617 log.go:172] (0xc0004a40b0) (0xc00014d4a0) Create stream\nI0603 00:34:36.821496 2617 log.go:172] (0xc0004a40b0) 
(0xc00014d4a0) Stream added, broadcasting: 3\nI0603 00:34:36.822216 2617 log.go:172] (0xc0004a40b0) Reply frame received for 3\nI0603 00:34:36.822237 2617 log.go:172] (0xc0004a40b0) (0xc000644dc0) Create stream\nI0603 00:34:36.822244 2617 log.go:172] (0xc0004a40b0) (0xc000644dc0) Stream added, broadcasting: 5\nI0603 00:34:36.822873 2617 log.go:172] (0xc0004a40b0) Reply frame received for 5\nI0603 00:34:36.877890 2617 log.go:172] (0xc0004a40b0) Data frame received for 5\nI0603 00:34:36.877934 2617 log.go:172] (0xc000644dc0) (5) Data frame handling\nI0603 00:34:36.877949 2617 log.go:172] (0xc000644dc0) (5) Data frame sent\nI0603 00:34:36.877958 2617 log.go:172] (0xc0004a40b0) Data frame received for 5\nI0603 00:34:36.877963 2617 log.go:172] (0xc000644dc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.40.139 80\nConnection to 10.98.40.139 80 port [tcp/http] succeeded!\nI0603 00:34:36.877980 2617 log.go:172] (0xc0004a40b0) Data frame received for 3\nI0603 00:34:36.877986 2617 log.go:172] (0xc00014d4a0) (3) Data frame handling\nI0603 00:34:36.879410 2617 log.go:172] (0xc0004a40b0) Data frame received for 1\nI0603 00:34:36.879436 2617 log.go:172] (0xc000644500) (1) Data frame handling\nI0603 00:34:36.879447 2617 log.go:172] (0xc000644500) (1) Data frame sent\nI0603 00:34:36.879460 2617 log.go:172] (0xc0004a40b0) (0xc000644500) Stream removed, broadcasting: 1\nI0603 00:34:36.879778 2617 log.go:172] (0xc0004a40b0) (0xc000644500) Stream removed, broadcasting: 1\nI0603 00:34:36.879796 2617 log.go:172] (0xc0004a40b0) (0xc00014d4a0) Stream removed, broadcasting: 3\nI0603 00:34:36.879927 2617 log.go:172] (0xc0004a40b0) (0xc000644dc0) Stream removed, broadcasting: 5\n" Jun 3 00:34:36.886: INFO: stdout: "" Jun 3 00:34:36.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30968' Jun 3 00:34:37.079: INFO: stderr: "I0603 00:34:37.014017 2639 log.go:172] (0xc000af9810) (0xc000b445a0) Create stream\nI0603 00:34:37.014073 2639 log.go:172] (0xc000af9810) (0xc000b445a0) Stream added, broadcasting: 1\nI0603 00:34:37.019406 2639 log.go:172] (0xc000af9810) Reply frame received for 1\nI0603 00:34:37.019469 2639 log.go:172] (0xc000af9810) (0xc00042c5a0) Create stream\nI0603 00:34:37.019490 2639 log.go:172] (0xc000af9810) (0xc00042c5a0) Stream added, broadcasting: 3\nI0603 00:34:37.020449 2639 log.go:172] (0xc000af9810) Reply frame received for 3\nI0603 00:34:37.020488 2639 log.go:172] (0xc000af9810) (0xc0000f3900) Create stream\nI0603 00:34:37.020497 2639 log.go:172] (0xc000af9810) (0xc0000f3900) Stream added, broadcasting: 5\nI0603 00:34:37.021602 2639 log.go:172] (0xc000af9810) Reply frame received for 5\nI0603 00:34:37.072130 2639 log.go:172] (0xc000af9810) Data frame received for 5\nI0603 00:34:37.072205 2639 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0603 00:34:37.072234 2639 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30968\nConnection to 172.17.0.13 30968 port [tcp/30968] succeeded!\nI0603 00:34:37.072266 2639 log.go:172] (0xc000af9810) Data frame received for 3\nI0603 00:34:37.072293 2639 log.go:172] (0xc00042c5a0) (3) Data frame handling\nI0603 00:34:37.072318 2639 log.go:172] (0xc000af9810) Data frame received for 5\nI0603 00:34:37.072339 2639 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0603 00:34:37.074445 2639 log.go:172] (0xc000af9810) Data frame received for 1\nI0603 00:34:37.074539 2639 
log.go:172] (0xc000b445a0) (1) Data frame handling\nI0603 00:34:37.074606 2639 log.go:172] (0xc000b445a0) (1) Data frame sent\nI0603 00:34:37.074630 2639 log.go:172] (0xc000af9810) (0xc000b445a0) Stream removed, broadcasting: 1\nI0603 00:34:37.074644 2639 log.go:172] (0xc000af9810) Go away received\nI0603 00:34:37.075031 2639 log.go:172] (0xc000af9810) (0xc000b445a0) Stream removed, broadcasting: 1\nI0603 00:34:37.075052 2639 log.go:172] (0xc000af9810) (0xc00042c5a0) Stream removed, broadcasting: 3\nI0603 00:34:37.075060 2639 log.go:172] (0xc000af9810) (0xc0000f3900) Stream removed, broadcasting: 5\n" Jun 3 00:34:37.079: INFO: stdout: "" Jun 3 00:34:37.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30968' Jun 3 00:34:37.308: INFO: stderr: "I0603 00:34:37.224229 2660 log.go:172] (0xc000b7d290) (0xc000bac460) Create stream\nI0603 00:34:37.224283 2660 log.go:172] (0xc000b7d290) (0xc000bac460) Stream added, broadcasting: 1\nI0603 00:34:37.226553 2660 log.go:172] (0xc000b7d290) Reply frame received for 1\nI0603 00:34:37.226581 2660 log.go:172] (0xc000b7d290) (0xc000698e60) Create stream\nI0603 00:34:37.226589 2660 log.go:172] (0xc000b7d290) (0xc000698e60) Stream added, broadcasting: 3\nI0603 00:34:37.227327 2660 log.go:172] (0xc000b7d290) Reply frame received for 3\nI0603 00:34:37.227350 2660 log.go:172] (0xc000b7d290) (0xc000bac500) Create stream\nI0603 00:34:37.227356 2660 log.go:172] (0xc000b7d290) (0xc000bac500) Stream added, broadcasting: 5\nI0603 00:34:37.227954 2660 log.go:172] (0xc000b7d290) Reply frame received for 5\nI0603 00:34:37.299973 2660 log.go:172] (0xc000b7d290) Data frame received for 5\nI0603 00:34:37.300010 2660 log.go:172] (0xc000bac500) (5) Data frame handling\nI0603 00:34:37.300037 2660 log.go:172] (0xc000bac500) (5) Data frame sent\nI0603 00:34:37.300063 2660 log.go:172] (0xc000b7d290) Data frame received for 5\nI0603 00:34:37.300074 2660 log.go:172] (0xc000bac500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30968\nConnection to 172.17.0.12 30968 port [tcp/30968] succeeded!\nI0603 00:34:37.300214 2660 log.go:172] (0xc000b7d290) Data frame received for 3\nI0603 00:34:37.300228 2660 log.go:172] (0xc000698e60) (3) Data frame handling\nI0603 00:34:37.302033 2660 log.go:172] (0xc000b7d290) Data frame received for 1\nI0603 00:34:37.302056 2660 log.go:172] (0xc000bac460) (1) Data frame handling\nI0603 00:34:37.302070 2660 log.go:172] (0xc000bac460) (1) Data frame sent\nI0603 00:34:37.302087 2660 log.go:172] (0xc000b7d290) (0xc000bac460) Stream removed, broadcasting: 1\nI0603 00:34:37.302123 2660 log.go:172] (0xc000b7d290) Go away received\nI0603 00:34:37.302374 2660 log.go:172] (0xc000b7d290) (0xc000bac460) Stream removed, broadcasting: 1\nI0603 00:34:37.302391 2660 log.go:172] (0xc000b7d290) (0xc000698e60) Stream removed, broadcasting: 3\nI0603 00:34:37.302411 2660 log.go:172] (0xc000b7d290) (0xc000bac500) Stream removed, broadcasting: 5\n" Jun 3 00:34:37.308: INFO: stdout: "" Jun 3 00:34:37.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30968/ ; done' Jun 3 00:34:37.660: INFO: stderr: "I0603 00:34:37.433023 2680 log.go:172] (0xc0009f6630) (0xc0006c1860) Create stream\nI0603 00:34:37.433083 2680 
log.go:172] (0xc0009f6630) (0xc0006c1860) Stream added, broadcasting: 1\nI0603 00:34:37.435214 2680 log.go:172] (0xc0009f6630) Reply frame received for 1\nI0603 00:34:37.435247 2680 log.go:172] (0xc0009f6630) (0xc0006c1ae0) Create stream\nI0603 00:34:37.435262 2680 log.go:172] (0xc0009f6630) (0xc0006c1ae0) Stream added, broadcasting: 3\nI0603 00:34:37.436165 2680 log.go:172] (0xc0009f6630) Reply frame received for 3\nI0603 00:34:37.436196 2680 log.go:172] (0xc0009f6630) (0xc00039c6e0) Create stream\nI0603 00:34:37.436206 2680 log.go:172] (0xc0009f6630) (0xc00039c6e0) Stream added, broadcasting: 5\nI0603 00:34:37.437009 2680 log.go:172] (0xc0009f6630) Reply frame received for 5\nI0603 00:34:37.574277 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.574319 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.574334 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.574351 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.574364 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.574388 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.576968 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.576980 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.576985 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.577559 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.577568 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.577573 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.577602 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.577631 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.577642 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.580242 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.580250 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.580255 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.580921 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.580951 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.580983 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.581001 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.581015 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.581029 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.581050 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.581067 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.581079 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.584158 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.584173 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.584186 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.584520 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.584534 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.584543 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.584559 2680 log.go:172] (0xc0009f6630) Data frame 
received for 5\nI0603 00:34:37.584579 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.584590 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.588481 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.588500 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.588515 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.588834 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.588846 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.588856 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.588867 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.588874 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.588881 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.593856 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.593880 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.593896 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.594068 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.594094 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.594113 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.594137 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.594152 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.594175 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.598414 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.598425 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.598431 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.598739 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.598753 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.598767 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.598772 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.598780 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.598784 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.602723 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.602733 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.602739 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.603094 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.603102 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.603106 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.603115 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.603126 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.603136 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.607468 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.607483 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.607490 2680 log.go:172] (0xc0006c1ae0) (3) Data 
frame sent\nI0603 00:34:37.608360 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.608374 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.608385 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.608397 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.608419 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.608432 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.613744 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.613765 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.613785 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.614118 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.614147 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.614166 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.614187 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.614202 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.614221 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.617933 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.617947 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.617955 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.618735 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.618774 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\nI0603 00:34:37.618790 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.618821 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.618840 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.618863 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.618880 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.618897 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.618919 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.622089 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.622109 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.622119 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.622415 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.622438 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.622450 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.622466 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.622478 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.622488 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.622499 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.622508 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.622527 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.627106 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.627125 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.627147 2680 log.go:172] (0xc0006c1ae0) 
(3) Data frame sent\nI0603 00:34:37.627602 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.627637 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.627654 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.627679 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.627697 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.627720 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.627748 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.627766 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.627818 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.633454 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.633477 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.633503 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.633926 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.633944 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.633962 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0603 00:34:37.634060 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.634094 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.634109 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.634133 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.634144 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.634161 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\n http://172.17.0.13:30968/\nI0603 00:34:37.639595 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.639632 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.639649 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.639678 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.639728 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.639750 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.639767 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.639810 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.639855 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.639876 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.639905 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.639945 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.645680 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.645693 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.645700 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.646099 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.646129 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.646143 2680 log.go:172] (0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.646154 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.646164 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.646196 2680 log.go:172] 
(0xc00039c6e0) (5) Data frame sent\nI0603 00:34:37.646215 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.646229 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.646247 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.650003 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.650015 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.650020 2680 log.go:172] (0xc0006c1ae0) (3) Data frame sent\nI0603 00:34:37.650511 2680 log.go:172] (0xc0009f6630) Data frame received for 5\nI0603 00:34:37.650527 2680 log.go:172] (0xc00039c6e0) (5) Data frame handling\nI0603 00:34:37.650753 2680 log.go:172] (0xc0009f6630) Data frame received for 3\nI0603 00:34:37.650777 2680 log.go:172] (0xc0006c1ae0) (3) Data frame handling\nI0603 00:34:37.652295 2680 log.go:172] (0xc0009f6630) Data frame received for 1\nI0603 00:34:37.652316 2680 log.go:172] (0xc0006c1860) (1) Data frame handling\nI0603 00:34:37.652329 2680 log.go:172] (0xc0006c1860) (1) Data frame sent\nI0603 00:34:37.652343 2680 log.go:172] (0xc0009f6630) (0xc0006c1860) Stream removed, broadcasting: 1\nI0603 00:34:37.652357 2680 log.go:172] (0xc0009f6630) Go away received\nI0603 00:34:37.652788 2680 log.go:172] (0xc0009f6630) (0xc0006c1860) Stream removed, broadcasting: 1\nI0603 00:34:37.652820 2680 log.go:172] (0xc0009f6630) (0xc0006c1ae0) Stream removed, broadcasting: 3\nI0603 00:34:37.652842 2680 log.go:172] (0xc0009f6630) (0xc00039c6e0) Stream removed, broadcasting: 5\n" Jun 3 00:34:37.660: INFO: stdout: "\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj\naffinity-nodeport-timeout-jwphj" Jun 3 00:34:37.660: INFO: Received response from host: Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj Jun 3 00:34:37.660: INFO: Received response from host: affinity-nodeport-timeout-jwphj 
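All sixteen requests above came back from the same endpoint, affinity-nodeport-timeout-jwphj: with ClientIP session affinity, kube-proxy pins a client to one backend until the affinity record expires. A minimal Go sketch of the Service shape that produces this behavior, built from the k8s.io/api types; the selector, target port, and 10-second timeout are illustrative assumptions, not values read from this log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityService builds a NodePort Service whose ClientIP session
// affinity expires after timeoutSeconds, so repeated requests from a
// single client stick to one endpoint until the timeout elapses.
func affinityService(timeoutSeconds int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"}, // name as seen in the log
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:       80,                   // service port probed by nc/curl above
				TargetPort: intstr.FromInt(9376), // assumed container port
			}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
			},
		},
	}
}

func main() {
	svc := affinityService(10) // assumed timeout; the log only shows affinity resetting within ~15s
	fmt.Println(svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}

The timeout is what the next two probes exercise: the single request at 00:34:37 still lands on affinity-nodeport-timeout-jwphj, while the one issued 15 seconds later reaches a different backend, consistent with the affinity record having expired in between.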
Jun 3 00:34:37.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30968/' Jun 3 00:34:37.863: INFO: stderr: "I0603 00:34:37.768689 2700 log.go:172] (0xc0009f4f20) (0xc0003a83c0) Create stream\nI0603 00:34:37.768744 2700 log.go:172] (0xc0009f4f20) (0xc0003a83c0) Stream added, broadcasting: 1\nI0603 00:34:37.771239 2700 log.go:172] (0xc0009f4f20) Reply frame received for 1\nI0603 00:34:37.771277 2700 log.go:172] (0xc0009f4f20) (0xc000137900) Create stream\nI0603 00:34:37.771288 2700 log.go:172] (0xc0009f4f20) (0xc000137900) Stream added, broadcasting: 3\nI0603 00:34:37.772263 2700 log.go:172] (0xc0009f4f20) Reply frame received for 3\nI0603 00:34:37.772318 2700 log.go:172] (0xc0009f4f20) (0xc0003a8a00) Create stream\nI0603 00:34:37.772367 2700 log.go:172] (0xc0009f4f20) (0xc0003a8a00) Stream added, broadcasting: 5\nI0603 00:34:37.773482 2700 log.go:172] (0xc0009f4f20) Reply frame received for 5\nI0603 00:34:37.851953 2700 log.go:172] (0xc0009f4f20) Data frame received for 5\nI0603 00:34:37.851976 2700 log.go:172] (0xc0003a8a00) (5) Data frame handling\nI0603 00:34:37.851991 2700 log.go:172] (0xc0003a8a00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:37.854344 2700 log.go:172] (0xc0009f4f20) Data frame received for 3\nI0603 00:34:37.854375 2700 log.go:172] (0xc000137900) (3) Data frame handling\nI0603 00:34:37.854397 2700 log.go:172] (0xc000137900) (3) Data frame sent\nI0603 00:34:37.855118 2700 log.go:172] (0xc0009f4f20) Data frame received for 5\nI0603 00:34:37.855136 2700 log.go:172] (0xc0003a8a00) (5) Data frame handling\nI0603 00:34:37.855189 2700 log.go:172] (0xc0009f4f20) Data frame received for 3\nI0603 00:34:37.855225 2700 log.go:172] (0xc000137900) (3) Data frame handling\nI0603 00:34:37.856763 2700 log.go:172] (0xc0009f4f20) Data frame received for 1\nI0603 00:34:37.856785 2700 log.go:172] (0xc0003a83c0) (1) Data frame handling\nI0603 00:34:37.856797 2700 log.go:172] (0xc0003a83c0) (1) Data frame sent\nI0603 00:34:37.856810 2700 log.go:172] (0xc0009f4f20) (0xc0003a83c0) Stream removed, broadcasting: 1\nI0603 00:34:37.856826 2700 log.go:172] (0xc0009f4f20) Go away received\nI0603 00:34:37.857551 2700 log.go:172] (0xc0009f4f20) (0xc0003a83c0) Stream removed, broadcasting: 1\nI0603 00:34:37.857601 2700 log.go:172] (0xc0009f4f20) (0xc000137900) Stream removed, broadcasting: 3\nI0603 00:34:37.857628 2700 log.go:172] (0xc0009f4f20) (0xc0003a8a00) Stream removed, broadcasting: 5\n" Jun 3 00:34:37.863: INFO: stdout: "affinity-nodeport-timeout-jwphj" Jun 3 00:34:52.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6908 execpod-affinityxh7mk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30968/' Jun 3 00:34:53.092: INFO: stderr: "I0603 00:34:52.984024 2719 log.go:172] (0xc000aa0000) (0xc0002feaa0) Create stream\nI0603 00:34:52.984091 2719 log.go:172] (0xc000aa0000) (0xc0002feaa0) Stream added, broadcasting: 1\nI0603 00:34:52.987428 2719 log.go:172] (0xc000aa0000) Reply frame received for 1\nI0603 00:34:52.987470 2719 log.go:172] (0xc000aa0000) (0xc0000dd860) Create stream\nI0603 00:34:52.987481 2719 log.go:172] (0xc000aa0000) (0xc0000dd860) Stream added, broadcasting: 3\nI0603 00:34:52.988331 2719 log.go:172] (0xc000aa0000) Reply frame received for 3\nI0603 
00:34:52.988393 2719 log.go:172] (0xc000aa0000) (0xc0002ffcc0) Create stream\nI0603 00:34:52.988409 2719 log.go:172] (0xc000aa0000) (0xc0002ffcc0) Stream added, broadcasting: 5\nI0603 00:34:52.989288 2719 log.go:172] (0xc000aa0000) Reply frame received for 5\nI0603 00:34:53.078661 2719 log.go:172] (0xc000aa0000) Data frame received for 5\nI0603 00:34:53.078687 2719 log.go:172] (0xc0002ffcc0) (5) Data frame handling\nI0603 00:34:53.078709 2719 log.go:172] (0xc0002ffcc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30968/\nI0603 00:34:53.084933 2719 log.go:172] (0xc000aa0000) Data frame received for 3\nI0603 00:34:53.084976 2719 log.go:172] (0xc0000dd860) (3) Data frame handling\nI0603 00:34:53.085012 2719 log.go:172] (0xc0000dd860) (3) Data frame sent\nI0603 00:34:53.085814 2719 log.go:172] (0xc000aa0000) Data frame received for 5\nI0603 00:34:53.085840 2719 log.go:172] (0xc0002ffcc0) (5) Data frame handling\nI0603 00:34:53.085869 2719 log.go:172] (0xc000aa0000) Data frame received for 3\nI0603 00:34:53.085898 2719 log.go:172] (0xc0000dd860) (3) Data frame handling\nI0603 00:34:53.087773 2719 log.go:172] (0xc000aa0000) Data frame received for 1\nI0603 00:34:53.087819 2719 log.go:172] (0xc0002feaa0) (1) Data frame handling\nI0603 00:34:53.087843 2719 log.go:172] (0xc0002feaa0) (1) Data frame sent\nI0603 00:34:53.087871 2719 log.go:172] (0xc000aa0000) (0xc0002feaa0) Stream removed, broadcasting: 1\nI0603 00:34:53.088148 2719 log.go:172] (0xc000aa0000) Go away received\nI0603 00:34:53.088345 2719 log.go:172] (0xc000aa0000) (0xc0002feaa0) Stream removed, broadcasting: 1\nI0603 00:34:53.088358 2719 log.go:172] (0xc000aa0000) (0xc0000dd860) Stream removed, broadcasting: 3\nI0603 00:34:53.088365 2719 log.go:172] (0xc000aa0000) (0xc0002ffcc0) Stream removed, broadcasting: 5\n" Jun 3 00:34:53.093: INFO: stdout: "affinity-nodeport-timeout-sj9ts" Jun 3 00:34:53.093: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6908, will wait for the garbage collector to delete the pods Jun 3 00:34:53.224: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.711608ms Jun 3 00:34:53.724: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.255338ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:35:05.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6908" for this suite. 
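Earlier in this test (00:34:13) the suite decided which dataplane it was exercising by exec'ing curl against kube-proxy's status endpoint on port 10249 from a detector pod and reading back "iptables". The same probe as a standalone Go sketch; it assumes it runs on a node (or a host-network pod) where kube-proxy is actually listening on localhost:10249:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// proxyMode queries kube-proxy's /proxyMode endpoint on its default
// metrics port and returns the reported mode, e.g. "iptables" or "ipvs".
func proxyMode() (string, error) {
	client := &http.Client{Timeout: time.Second} // mirrors curl's --connect-timeout 1
	resp, err := client.Get("http://localhost:10249/proxyMode")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	mode, err := proxyMode()
	if err != nil {
		fmt.Println("could not detect proxy mode:", err)
		return
	}
	fmt.Println("kube-proxy mode:", mode)
}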
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:56.220 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":212,"skipped":3428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:35:05.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-f875d763-b63f-462b-af22-9ae4bdd50d37 STEP: Creating a pod to test consume configMaps Jun 3 00:35:05.124: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b" in namespace "projected-442" to be "Succeeded or Failed" Jun 3 00:35:05.137: INFO: Pod "pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.095328ms Jun 3 00:35:07.150: INFO: Pod "pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026378613s Jun 3 00:35:09.154: INFO: Pod "pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029828202s STEP: Saw pod success Jun 3 00:35:09.154: INFO: Pod "pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b" satisfied condition "Succeeded or Failed" Jun 3 00:35:09.157: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b container projected-configmap-volume-test: STEP: delete the pod Jun 3 00:35:09.331: INFO: Waiting for pod pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b to disappear Jun 3 00:35:09.369: INFO: Pod pod-projected-configmaps-af65fefd-f415-475b-94eb-3c59db54621b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:35:09.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-442" for this suite. 
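The projected-ConfigMap variant that just ran differs from the plain ConfigMap volume test earlier in one detail: the files are materialized through a projected volume, and defaultMode fixes their permission bits. A sketch of that volume shape in Go; the volume name and the 0400 mode are assumptions for illustration (the log does not print the mode the conformance test uses):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume builds a projected volume sourcing a single
// ConfigMap, with DefaultMode controlling the mode bits of the files
// the kubelet writes into the container's mount.
func projectedConfigMapVolume(configMapName string, mode int32) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				}},
			},
		},
	}
}

func main() {
	// ConfigMap name taken from the log entries above.
	v := projectedConfigMapVolume("projected-configmap-test-volume-f875d763-b63f-462b-af22-9ae4bdd50d37", 0400)
	fmt.Printf("volume %q, defaultMode %o\n", v.Name, *v.VolumeSource.Projected.DefaultMode)
}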
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3472,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:35:09.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 3 00:35:14.045: INFO: Successfully updated pod "labelsupdate073a398e-12d1-446a-acb1-bb745bc879ec" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:35:16.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7549" for this suite. • [SLOW TEST:6.714 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":214,"skipped":3478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:35:16.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5503 STEP: creating service affinity-clusterip in namespace services-5503 STEP: creating replication controller affinity-clusterip in namespace services-5503 I0603 00:35:16.212137 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-5503, replica count: 3 I0603 00:35:19.262531 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0603 00:35:22.262781 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:35:22.268: INFO: Creating new exec pod Jun 3 00:35:27.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5503 execpod-affinitybtds6 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jun 3 00:35:27.540: INFO: stderr: "I0603 00:35:27.428726 2739 log.go:172] (0xc0009a1130) (0xc000aa81e0) Create stream\nI0603 00:35:27.428776 2739 log.go:172] (0xc0009a1130) (0xc000aa81e0) Stream added, broadcasting: 1\nI0603 00:35:27.432425 2739 log.go:172] (0xc0009a1130) Reply frame received for 1\nI0603 00:35:27.432484 2739 log.go:172] (0xc0009a1130) (0xc00083c0a0) Create stream\nI0603 00:35:27.432505 2739 log.go:172] (0xc0009a1130) (0xc00083c0a0) Stream added, broadcasting: 3\nI0603 00:35:27.433623 2739 log.go:172] (0xc0009a1130) Reply frame received for 3\nI0603 00:35:27.433642 2739 log.go:172] (0xc0009a1130) (0xc0006363c0) Create stream\nI0603 00:35:27.433648 2739 log.go:172] (0xc0009a1130) (0xc0006363c0) Stream added, broadcasting: 5\nI0603 00:35:27.434374 2739 log.go:172] (0xc0009a1130) Reply frame received for 5\nI0603 00:35:27.532820 2739 log.go:172] (0xc0009a1130) Data frame received for 5\nI0603 00:35:27.532847 2739 log.go:172] (0xc0006363c0) (5) Data frame handling\nI0603 00:35:27.532863 2739 log.go:172] (0xc0006363c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0603 00:35:27.532999 2739 log.go:172] (0xc0009a1130) Data frame received for 5\nI0603 00:35:27.533020 2739 log.go:172] (0xc0006363c0) (5) Data frame handling\nI0603 00:35:27.533039 2739 log.go:172] (0xc0006363c0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0603 00:35:27.533471 2739 log.go:172] (0xc0009a1130) Data frame received for 3\nI0603 00:35:27.533487 2739 log.go:172] (0xc00083c0a0) (3) Data frame handling\nI0603 00:35:27.533519 2739 log.go:172] (0xc0009a1130) Data frame received for 5\nI0603 00:35:27.533541 2739 log.go:172] (0xc0006363c0) (5) Data frame handling\nI0603 00:35:27.534913 2739 log.go:172] (0xc0009a1130) Data frame received for 1\nI0603 00:35:27.534928 2739 log.go:172] (0xc000aa81e0) (1) Data frame handling\nI0603 00:35:27.534937 2739 log.go:172] (0xc000aa81e0) (1) Data frame sent\nI0603 00:35:27.534949 2739 log.go:172] (0xc0009a1130) (0xc000aa81e0) Stream removed, broadcasting: 1\nI0603 00:35:27.535005 2739 log.go:172] (0xc0009a1130) Go away received\nI0603 00:35:27.535187 2739 log.go:172] (0xc0009a1130) (0xc000aa81e0) Stream removed, broadcasting: 1\nI0603 00:35:27.535196 2739 log.go:172] (0xc0009a1130) (0xc00083c0a0) Stream removed, broadcasting: 3\nI0603 00:35:27.535201 2739 log.go:172] (0xc0009a1130) (0xc0006363c0) Stream removed, broadcasting: 5\n" Jun 3 00:35:27.540: INFO: stdout: "" Jun 3 00:35:27.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5503 execpod-affinitybtds6 -- /bin/sh -x -c nc -zv -t -w 2 10.111.4.103 80' Jun 3 00:35:27.730: INFO: stderr: "I0603 00:35:27.660727 2759 log.go:172] (0xc0009dc000) (0xc00064ef00) Create stream\nI0603 00:35:27.660780 2759 log.go:172] (0xc0009dc000) (0xc00064ef00) Stream added, broadcasting: 1\nI0603 00:35:27.662888 2759 log.go:172] (0xc0009dc000) Reply frame received for 1\nI0603 00:35:27.662940 2759 log.go:172] 
(0xc0009dc000) (0xc000644780) Create stream\nI0603 00:35:27.662955 2759 log.go:172] (0xc0009dc000) (0xc000644780) Stream added, broadcasting: 3\nI0603 00:35:27.663921 2759 log.go:172] (0xc0009dc000) Reply frame received for 3\nI0603 00:35:27.663947 2759 log.go:172] (0xc0009dc000) (0xc00064fea0) Create stream\nI0603 00:35:27.663956 2759 log.go:172] (0xc0009dc000) (0xc00064fea0) Stream added, broadcasting: 5\nI0603 00:35:27.664731 2759 log.go:172] (0xc0009dc000) Reply frame received for 5\nI0603 00:35:27.723241 2759 log.go:172] (0xc0009dc000) Data frame received for 5\nI0603 00:35:27.723272 2759 log.go:172] (0xc00064fea0) (5) Data frame handling\nI0603 00:35:27.723293 2759 log.go:172] (0xc00064fea0) (5) Data frame sent\n+ nc -zv -t -w 2 10.111.4.103 80\nConnection to 10.111.4.103 80 port [tcp/http] succeeded!\nI0603 00:35:27.723380 2759 log.go:172] (0xc0009dc000) Data frame received for 5\nI0603 00:35:27.723392 2759 log.go:172] (0xc00064fea0) (5) Data frame handling\nI0603 00:35:27.723412 2759 log.go:172] (0xc0009dc000) Data frame received for 3\nI0603 00:35:27.723418 2759 log.go:172] (0xc000644780) (3) Data frame handling\nI0603 00:35:27.724996 2759 log.go:172] (0xc0009dc000) Data frame received for 1\nI0603 00:35:27.725010 2759 log.go:172] (0xc00064ef00) (1) Data frame handling\nI0603 00:35:27.725018 2759 log.go:172] (0xc00064ef00) (1) Data frame sent\nI0603 00:35:27.725358 2759 log.go:172] (0xc0009dc000) (0xc00064ef00) Stream removed, broadcasting: 1\nI0603 00:35:27.725546 2759 log.go:172] (0xc0009dc000) Go away received\nI0603 00:35:27.725757 2759 log.go:172] (0xc0009dc000) (0xc00064ef00) Stream removed, broadcasting: 1\nI0603 00:35:27.725784 2759 log.go:172] (0xc0009dc000) (0xc000644780) Stream removed, broadcasting: 3\nI0603 00:35:27.725803 2759 log.go:172] (0xc0009dc000) (0xc00064fea0) Stream removed, broadcasting: 5\n" Jun 3 00:35:27.731: INFO: stdout: "" Jun 3 00:35:27.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5503 execpod-affinitybtds6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.4.103:80/ ; done' Jun 3 00:35:28.032: INFO: stderr: "I0603 00:35:27.875721 2779 log.go:172] (0xc00094c160) (0xc00080a820) Create stream\nI0603 00:35:27.875784 2779 log.go:172] (0xc00094c160) (0xc00080a820) Stream added, broadcasting: 1\nI0603 00:35:27.878884 2779 log.go:172] (0xc00094c160) Reply frame received for 1\nI0603 00:35:27.878916 2779 log.go:172] (0xc00094c160) (0xc00081bf40) Create stream\nI0603 00:35:27.878924 2779 log.go:172] (0xc00094c160) (0xc00081bf40) Stream added, broadcasting: 3\nI0603 00:35:27.880073 2779 log.go:172] (0xc00094c160) Reply frame received for 3\nI0603 00:35:27.880110 2779 log.go:172] (0xc00094c160) (0xc00082d7c0) Create stream\nI0603 00:35:27.880120 2779 log.go:172] (0xc00094c160) (0xc00082d7c0) Stream added, broadcasting: 5\nI0603 00:35:27.881086 2779 log.go:172] (0xc00094c160) Reply frame received for 5\nI0603 00:35:27.938099 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.938137 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.938150 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.938168 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.938202 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.938219 2779 log.go:172] (0xc00081bf40) (3) Data 
frame sent\nI0603 00:35:27.944077 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.944098 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.944116 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.944755 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.944792 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.944813 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\nI0603 00:35:27.944839 2779 log.go:172] (0xc00094c160) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.944857 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.944881 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.953613 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.953655 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.953673 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.953697 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.953714 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.953736 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.956830 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.956852 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.956873 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.957363 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.957379 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.957389 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.957477 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.957489 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.957502 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.960423 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.960435 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.960444 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.960829 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.960849 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.960863 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.960876 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.960884 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.960891 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.965877 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.965895 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.965965 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.966336 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.966351 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.966359 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.966376 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.966396 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.966410 2779 log.go:172] (0xc00082d7c0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.969851 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.969877 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.969902 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.970330 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.970352 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.970377 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.970391 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\nI0603 00:35:27.970410 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.970418 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.970439 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\nI0603 00:35:27.970454 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.970470 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.973677 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.973700 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.973715 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.974186 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.974206 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.974218 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.974228 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.974235 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.974244 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.978778 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.978804 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.978823 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.979386 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.979415 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.979427 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.979445 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.979452 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.979459 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.983487 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.983510 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.983524 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.983575 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.983601 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.983613 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0603 00:35:27.983699 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.983718 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.983726 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n 2 http://10.111.4.103:80/\nI0603 00:35:27.983743 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.983765 2779 log.go:172] (0xc00081bf40) (3) Data frame 
handling\nI0603 00:35:27.983781 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.988140 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.988161 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.988176 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.988692 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.988722 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.988744 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.988765 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.988776 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.988787 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.993340 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.993370 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.993390 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.993726 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.993740 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.993748 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:27.993760 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:27.993766 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:27.993772 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:27.999758 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:27.999783 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:27.999813 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.000398 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.000411 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.000429 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:28.000461 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:28.000475 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:28.000499 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.005803 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.005834 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.005853 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.006330 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:28.006353 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:28.006366 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:28.006386 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.006401 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.006413 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.009922 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.009943 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.009960 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.010178 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.010208 2779 log.go:172] (0xc00081bf40) (3) Data frame 
handling\nI0603 00:35:28.010245 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.010270 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:28.010281 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:28.010305 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:28.014245 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.014270 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.014289 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.015115 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:28.015143 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:28.015157 2779 log.go:172] (0xc00082d7c0) (5) Data frame sent\nI0603 00:35:28.015170 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.015180 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.015201 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.4.103:80/\nI0603 00:35:28.021372 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.021414 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.021446 2779 log.go:172] (0xc00081bf40) (3) Data frame sent\nI0603 00:35:28.022048 2779 log.go:172] (0xc00094c160) Data frame received for 3\nI0603 00:35:28.022074 2779 log.go:172] (0xc00081bf40) (3) Data frame handling\nI0603 00:35:28.024333 2779 log.go:172] (0xc00094c160) Data frame received for 5\nI0603 00:35:28.024366 2779 log.go:172] (0xc00082d7c0) (5) Data frame handling\nI0603 00:35:28.025982 2779 log.go:172] (0xc00094c160) Data frame received for 1\nI0603 00:35:28.026007 2779 log.go:172] (0xc00080a820) (1) Data frame handling\nI0603 00:35:28.026022 2779 log.go:172] (0xc00080a820) (1) Data frame sent\nI0603 00:35:28.026071 2779 log.go:172] (0xc00094c160) (0xc00080a820) Stream removed, broadcasting: 1\nI0603 00:35:28.026105 2779 log.go:172] (0xc00094c160) Go away received\nI0603 00:35:28.026391 2779 log.go:172] (0xc00094c160) (0xc00080a820) Stream removed, broadcasting: 1\nI0603 00:35:28.026409 2779 log.go:172] (0xc00094c160) (0xc00081bf40) Stream removed, broadcasting: 3\nI0603 00:35:28.026417 2779 log.go:172] (0xc00094c160) (0xc00082d7c0) Stream removed, broadcasting: 5\n" Jun 3 00:35:28.032: INFO: stdout: "\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p\naffinity-clusterip-z595p" Jun 3 00:35:28.032: INFO: Received response from host: Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: 
affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Received response from host: affinity-clusterip-z595p Jun 3 00:35:28.032: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-5503, will wait for the garbage collector to delete the pods Jun 3 00:35:28.208: INFO: Deleting ReplicationController affinity-clusterip took: 83.805601ms Jun 3 00:35:28.608: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.268544ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:35:44.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5503" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.891 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":215,"skipped":3506,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:35:44.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 3 00:35:45.052: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 3 00:35:56.677: INFO: >>> kubeConfig: /root/.kube/config Jun 3 00:35:58.581: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:36:09.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-8851" for this suite. • [SLOW TEST:24.317 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":216,"skipped":3506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:36:09.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:36:09.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7823" for this suite. STEP: Destroying namespace "nspatchtest-4abb9ba8-15aa-4edb-a9ff-98f67834e601-2422" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":217,"skipped":3539,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:36:09.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2607 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2607 STEP: Creating statefulset with conflicting port in namespace statefulset-2607 STEP: Waiting until pod test-pod will start running in namespace statefulset-2607 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2607 Jun 3 00:36:15.748: INFO: Observed stateful pod in namespace: statefulset-2607, name: ss-0, uid: 5672d750-3010-45a6-b2ce-663f4f864f1f, status phase: Pending. Waiting for statefulset controller to delete. Jun 3 00:36:16.303: INFO: Observed stateful pod in namespace: statefulset-2607, name: ss-0, uid: 5672d750-3010-45a6-b2ce-663f4f864f1f, status phase: Failed. Waiting for statefulset controller to delete. Jun 3 00:36:16.368: INFO: Observed stateful pod in namespace: statefulset-2607, name: ss-0, uid: 5672d750-3010-45a6-b2ce-663f4f864f1f, status phase: Failed. Waiting for statefulset controller to delete. Jun 3 00:36:16.418: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2607 STEP: Removing pod with conflicting port in namespace statefulset-2607 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2607 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 3 00:36:20.563: INFO: Deleting all statefulset in ns statefulset-2607 Jun 3 00:36:20.565: INFO: Scaling statefulset ss to 0 Jun 3 00:36:30.582: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 00:36:30.585: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:36:30.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2607" for this suite. 
• [SLOW TEST:21.020 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":218,"skipped":3550,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:36:30.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:36:34.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8751" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":219,"skipped":3553,"failed":0} SSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:36:34.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-27817e4b-3418-48f0-8a5d-ca7686317c72 STEP: Creating secret with name secret-projected-all-test-volume-51015e0c-7d1b-49c1-b993-f07d834e07d9 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 3 00:36:35.383: INFO: Waiting up to 5m0s for pod "projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a" in namespace "projected-423" to be "Succeeded or Failed" Jun 3 00:36:35.394: INFO: Pod "projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.412667ms Jun 3 00:36:37.398: INFO: Pod "projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014604513s Jun 3 00:36:39.402: INFO: Pod "projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019033932s STEP: Saw pod success Jun 3 00:36:39.402: INFO: Pod "projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a" satisfied condition "Succeeded or Failed" Jun 3 00:36:39.405: INFO: Trying to get logs from node latest-worker2 pod projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a container projected-all-volume-test: STEP: delete the pod Jun 3 00:36:39.475: INFO: Waiting for pod projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a to disappear Jun 3 00:36:39.529: INFO: Pod projected-volume-26d2bc34-7fad-4dc1-bca3-0348b5acda3a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:36:39.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-423" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":220,"skipped":3557,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:36:39.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 Jun 3 00:36:39.638: INFO: Waiting up to 1m0s for all nodes to be ready Jun 3 00:37:39.661: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:37:39.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
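The preemption path exercised below depends on PriorityClass objects that the suite creates for itself; a minimal hand-made sketch (class name, value, and description are illustrative, not the ones the suite uses):

    # a high-priority class whose pods may preempt lower-priority ones;
    # reference it from a pod via spec.priorityClassName
    kubectl create priorityclass demo-high --value=1000 --description="illustrative high priority"
    # then inspect cluster events for preemption activity
    kubectl get events --all-namespaces | grep -i preempt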
Jun 3 00:37:43.812: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:38:03.950: INFO: pods created so far: [1 1 1] Jun 3 00:38:03.951: INFO: length of pods created so far: 3 Jun 3 00:38:16.008: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:38:23.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6244" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:38:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3987" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:103.592 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":221,"skipped":3570,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:38:23.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-992f76cf-7409-4335-84da-a895a104f02f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-992f76cf-7409-4335-84da-a895a104f02f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:38:29.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7509" for this suite. 
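The update-propagation behavior verified above can be observed by hand: the kubelet refreshes configMap volume contents periodically, so an edit shows up in the mounted file after a short delay. A minimal sketch, assuming a configmap demo-cm mounted into a pod demo-pod at /etc/cm (all names and paths illustrative):

    kubectl create configmap demo-cm --from-literal=key=value-1
    # ... mount demo-cm into demo-pod as a volume, then replace the data:
    kubectl create configmap demo-cm --from-literal=key=value-2 -o yaml --dry-run=client | kubectl replace -f -
    # poll the mounted file until the new value appears (can take up to a kubelet sync period)
    kubectl exec demo-pod -- cat /etc/cm/key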
• [SLOW TEST:6.323 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3574,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:38:29.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 3 00:38:29.829: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814768 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:38:29.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814768 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 3 00:38:39.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814817 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:38:39.838: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814817 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 
2020-06-03 00:38:39 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 3 00:38:49.847: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814849 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:38:49.847: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814849 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 3 00:38:59.854: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814879 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:38:59.854: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-a f5273c9a-49b8-48c8-b184-03b5f95f908f 9814879 0 2020-06-03 00:38:29 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-03 00:38:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 3 00:39:09.862: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-b 0a30ae5f-ff22-4f02-a49a-d9aef596767a 9814909 0 2020-06-03 00:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-03 00:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:39:09.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-b 0a30ae5f-ff22-4f02-a49a-d9aef596767a 9814909 0 2020-06-03 00:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-03 00:39:09 
+0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 3 00:39:19.870: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-b 0a30ae5f-ff22-4f02-a49a-d9aef596767a 9814939 0 2020-06-03 00:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-03 00:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 00:39:19.870: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3832 /api/v1/namespaces/watch-3832/configmaps/e2e-watch-test-configmap-b 0a30ae5f-ff22-4f02-a49a-d9aef596767a 9814939 0 2020-06-03 00:39:09 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-03 00:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:39:29.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3832" for this suite. • [SLOW TEST:60.429 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":223,"skipped":3586,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:39:29.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Jun 3 00:39:34.478: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7090 pod-service-account-8c7c8c4f-25a6-4a90-9242-258bdad15ef0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 3 00:39:37.575: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7090 pod-service-account-8c7c8c4f-25a6-4a90-9242-258bdad15ef0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 3 00:39:37.788: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7090 
pod-service-account-8c7c8c4f-25a6-4a90-9242-258bdad15ef0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:39:38.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7090" for this suite. • [SLOW TEST:8.130 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":224,"skipped":3594,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:39:38.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 3 00:39:38.064: INFO: Waiting up to 5m0s for pod "pod-0a8df886-69e2-4f53-a254-d3092151004d" in namespace "emptydir-998" to be "Succeeded or Failed" Jun 3 00:39:38.067: INFO: Pod "pod-0a8df886-69e2-4f53-a254-d3092151004d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.926594ms Jun 3 00:39:40.074: INFO: Pod "pod-0a8df886-69e2-4f53-a254-d3092151004d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010689915s Jun 3 00:39:42.078: INFO: Pod "pod-0a8df886-69e2-4f53-a254-d3092151004d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014305867s STEP: Saw pod success Jun 3 00:39:42.078: INFO: Pod "pod-0a8df886-69e2-4f53-a254-d3092151004d" satisfied condition "Succeeded or Failed" Jun 3 00:39:42.081: INFO: Trying to get logs from node latest-worker pod pod-0a8df886-69e2-4f53-a254-d3092151004d container test-container: STEP: delete the pod Jun 3 00:39:42.140: INFO: Waiting for pod pod-0a8df886-69e2-4f53-a254-d3092151004d to disappear Jun 3 00:39:42.146: INFO: Pod pod-0a8df886-69e2-4f53-a254-d3092151004d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:39:42.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-998" for this suite. 
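The mode check above is performed by the test image itself; an equivalent hand check from inside any pod with an emptyDir mounted at /mnt/volume (pod name and mount path illustrative) is:

    # print the octal permissions of the emptyDir mount point; expect 777 for this case
    kubectl exec <pod> -- stat -c '%a' /mnt/volume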
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3596,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:39:42.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 3 00:39:50.400: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 00:39:50.422: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 00:39:52.423: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 00:39:52.427: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 00:39:54.422: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 00:39:54.427: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:39:54.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7968" for this suite. 
• [SLOW TEST:12.286 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3616,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:39:54.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:39:54.554: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
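The per-node launch check that follows can be approximated by hand (namespace and label selector illustrative; control-plane nodes are skipped because the DaemonSet does not tolerate their NoSchedule taint):

    # expect one daemon pod Running on every schedulable node
    kubectl -n <namespace> get pods -l name=daemon-set -o wide
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints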
Jun 3 00:39:54.587: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:39:54.590: INFO: Number of nodes with available pods: 0 Jun 3 00:39:54.590: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:39:55.596: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:39:55.600: INFO: Number of nodes with available pods: 0 Jun 3 00:39:55.600: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:39:56.595: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:39:56.599: INFO: Number of nodes with available pods: 0 Jun 3 00:39:56.599: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:39:57.740: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:39:57.744: INFO: Number of nodes with available pods: 0 Jun 3 00:39:57.744: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:39:58.595: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:39:58.599: INFO: Number of nodes with available pods: 0 Jun 3 00:39:58.599: INFO: Node latest-worker is running more than one daemon pod Jun 3 00:39:59.669: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:39:59.674: INFO: Number of nodes with available pods: 2 Jun 3 00:39:59.674: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 3 00:40:00.150: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:00.150: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:00.194: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:01.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:01.200: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:01.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:02.200: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:02.200: INFO: Wrong image for pod: daemon-set-p7b7l. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:02.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:03.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:03.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:03.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:03.203: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:04.219: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:04.219: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:04.219: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:04.223: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:05.213: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:05.213: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:05.213: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:05.218: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:06.201: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:06.201: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:06.201: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:06.205: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:07.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:07.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:07.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:07.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:08.198: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 3 00:40:08.198: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:08.198: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:08.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:09.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:09.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:09.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:09.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:10.207: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:10.207: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:10.207: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:10.224: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:11.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:11.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:11.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:11.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:12.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:12.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:12.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:12.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:13.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:13.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 3 00:40:13.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:13.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:14.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:14.199: INFO: Wrong image for pod: daemon-set-p7b7l. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:14.199: INFO: Pod daemon-set-p7b7l is not available Jun 3 00:40:14.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:15.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:15.199: INFO: Pod daemon-set-qk7xk is not available Jun 3 00:40:15.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:16.200: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:16.200: INFO: Pod daemon-set-qk7xk is not available Jun 3 00:40:16.228: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:17.249: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:17.249: INFO: Pod daemon-set-qk7xk is not available Jun 3 00:40:17.253: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:18.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:18.203: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:19.209: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:19.213: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:20.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:20.199: INFO: Pod daemon-set-5xz5h is not available Jun 3 00:40:20.202: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:21.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 3 00:40:21.199: INFO: Pod daemon-set-5xz5h is not available Jun 3 00:40:21.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:22.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:22.199: INFO: Pod daemon-set-5xz5h is not available Jun 3 00:40:22.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:23.199: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:23.199: INFO: Pod daemon-set-5xz5h is not available Jun 3 00:40:23.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:24.200: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:24.200: INFO: Pod daemon-set-5xz5h is not available Jun 3 00:40:24.205: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:25.200: INFO: Wrong image for pod: daemon-set-5xz5h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 3 00:40:25.200: INFO: Pod daemon-set-5xz5h is not available Jun 3 00:40:25.206: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:26.205: INFO: Pod daemon-set-4cmnv is not available Jun 3 00:40:26.208: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 3 00:40:26.211: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:26.212: INFO: Number of nodes with available pods: 1 Jun 3 00:40:26.212: INFO: Node latest-worker2 is running more than one daemon pod Jun 3 00:40:27.219: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:27.222: INFO: Number of nodes with available pods: 1 Jun 3 00:40:27.222: INFO: Node latest-worker2 is running more than one daemon pod Jun 3 00:40:28.219: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:28.223: INFO: Number of nodes with available pods: 1 Jun 3 00:40:28.223: INFO: Node latest-worker2 is running more than one daemon pod Jun 3 00:40:29.219: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 00:40:29.222: INFO: Number of nodes with available pods: 2 Jun 3 00:40:29.222: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7623, will wait for the garbage collector to delete the pods Jun 3 00:40:29.294: INFO: Deleting DaemonSet.extensions daemon-set took: 7.004532ms Jun 3 00:40:29.595: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.2675ms Jun 3 00:40:35.298: INFO: Number of nodes with available pods: 0 Jun 3 00:40:35.298: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 00:40:35.301: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7623/daemonsets","resourceVersion":"9815335"},"items":null} Jun 3 00:40:35.304: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7623/pods","resourceVersion":"9815335"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:40:35.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7623" for this suite. 
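Note: the test above creates DaemonSet "daemon-set" with image docker.io/library/httpd:2.4.38-alpine, patches the pod template to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, and polls until every node runs the new image; the long "Wrong image for pod ... / Pod ... is not available" stretch is the RollingUpdate strategy replacing one pod at a time. A minimal sketch of a DaemonSet exercising the same path (the label key and the explicit maxUnavailable value are illustrative assumptions, not read from this log):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        name: daemon-set            # illustrative label; must match the template labels below
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1           # replace at most one node's pod at a time
    template:
      metadata:
        labels:
          name: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/httpd:2.4.38-alpine   # later patched to the agnhost image

The repeated "DaemonSet pods can't tolerate node latest-control-plane" lines are expected: the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the control-plane node is skipped when counting available pods.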
• [SLOW TEST:40.880 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":227,"skipped":3628,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:40:35.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Jun 3 00:40:35.374: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jun 3 00:40:35.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9962' Jun 3 00:40:35.763: INFO: stderr: "" Jun 3 00:40:35.763: INFO: stdout: "service/agnhost-slave created\n" Jun 3 00:40:35.763: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jun 3 00:40:35.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9962' Jun 3 00:40:36.083: INFO: stderr: "" Jun 3 00:40:36.083: INFO: stdout: "service/agnhost-master created\n" Jun 3 00:40:36.083: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jun 3 00:40:36.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9962' Jun 3 00:40:36.350: INFO: stderr: "" Jun 3 00:40:36.350: INFO: stdout: "service/frontend created\n" Jun 3 00:40:36.350: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jun 3 00:40:36.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9962' Jun 3 00:40:36.646: INFO: stderr: "" Jun 3 00:40:36.646: INFO: stdout: "deployment.apps/frontend created\n" Jun 3 00:40:36.646: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 3 00:40:36.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9962' Jun 3 00:40:36.924: INFO: stderr: "" Jun 3 00:40:36.924: INFO: stdout: "deployment.apps/agnhost-master created\n" Jun 3 00:40:36.924: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 3 00:40:36.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9962' Jun 3 00:40:37.343: INFO: stderr: "" Jun 3 00:40:37.343: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jun 3 00:40:37.343: INFO: Waiting for all frontend pods to be Running. Jun 3 00:40:47.394: INFO: Waiting for frontend to serve content. Jun 3 00:40:47.402: INFO: Trying to add a new entry to the guestbook. Jun 3 00:40:47.417: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 3 00:40:47.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9962' Jun 3 00:40:47.584: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:40:47.584: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jun 3 00:40:47.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9962' Jun 3 00:40:47.775: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:40:47.775: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 3 00:40:47.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9962' Jun 3 00:40:47.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:40:47.945: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 3 00:40:47.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9962' Jun 3 00:40:48.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:40:48.070: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 3 00:40:48.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9962' Jun 3 00:40:48.450: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:40:48.450: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 3 00:40:48.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9962' Jun 3 00:40:48.640: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 00:40:48.640: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:40:48.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9962" for this suite. 
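Note: the guestbook validation above depends entirely on label selection: each Service's spec.selector must match the labels on the corresponding Deployment's pod template, or "Waiting for frontend to serve content" would never succeed. Condensed from the manifests logged above (frontend pair only):

  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
  spec:
    ports:
    - port: 80
    selector:                # must agree with the Deployment template labels below
      app: guestbook
      tier: frontend
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: frontend
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: guestbook
        tier: frontend
    template:
      metadata:
        labels:
          app: guestbook
          tier: frontend
      spec:
        containers:
        - name: guestbook-frontend
          image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
          args: [ "guestbook", "--backend-port", "6379" ]
          ports:
          - containerPort: 80

The repeated deletion warning is the documented behavior of kubectl delete --grace-period=0 --force: the API object is removed immediately, without waiting for the kubelet to confirm that the containers have actually stopped.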
• [SLOW TEST:13.351 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":228,"skipped":3630,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:40:48.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:40:49.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 3 00:40:50.138: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T00:40:50Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-03T00:40:50Z]] name:name1 resourceVersion:9815565 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e84503e0-ee69-4e0b-87e5-baa0d41cca78] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 3 00:41:00.144: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T00:41:00Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-03T00:41:00Z]] name:name2 resourceVersion:9815648 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8ccb2b76-33b6-44ec-aa7c-33a05c09b26f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 3 00:41:10.150: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T00:40:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-03T00:41:10Z]] name:name1 resourceVersion:9815682 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e84503e0-ee69-4e0b-87e5-baa0d41cca78] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 3 
00:41:20.157: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T00:41:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-03T00:41:20Z]] name:name2 resourceVersion:9815712 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8ccb2b76-33b6-44ec-aa7c-33a05c09b26f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 3 00:41:30.166: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T00:40:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-03T00:41:10Z]] name:name1 resourceVersion:9815744 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e84503e0-ee69-4e0b-87e5-baa0d41cca78] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 3 00:41:40.206: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T00:41:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-03T00:41:20Z]] name:name2 resourceVersion:9815775 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8ccb2b76-33b6-44ec-aa7c-33a05c09b26f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:41:50.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-687" for this suite. 
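Note: the watch test above registers a custom resource definition and verifies that ADDED, MODIFIED, and DELETED events arrive in order for two "noxu" objects. A sketch of a CRD consistent with the events in this log (reconstructed from the selfLinks and kind shown above, not the exact e2e fixture; the cluster scope is inferred from the namespace-less path /apis/mygroup.example.com/v1beta1/noxus/name1, and the singular name is an assumption):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: noxus.mygroup.example.com
  spec:
    group: mygroup.example.com
    scope: Cluster
    names:
      plural: noxus
      singular: noxu                 # assumed; not visible in the log
      kind: WishIHadChosenNoxu
    versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # the events carry free-form content/num fields

Once such a CRD is registered, an equivalent manual check is kubectl get noxus --watch, which should report the same ADDED/MODIFIED/DELETED sequence as the events above.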
• [SLOW TEST:62.053 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":229,"skipped":3644,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:41:50.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:41:51.330: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 3 00:41:54.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3251 create -f -' Jun 3 00:41:57.407: INFO: stderr: "" Jun 3 00:41:57.407: INFO: stdout: "e2e-test-crd-publish-openapi-5882-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 3 00:41:57.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3251 delete e2e-test-crd-publish-openapi-5882-crds test-cr' Jun 3 00:41:57.527: INFO: stderr: "" Jun 3 00:41:57.527: INFO: stdout: "e2e-test-crd-publish-openapi-5882-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 3 00:41:57.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3251 apply -f -' Jun 3 00:41:57.779: INFO: stderr: "" Jun 3 00:41:57.779: INFO: stdout: "e2e-test-crd-publish-openapi-5882-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 3 00:41:57.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3251 delete e2e-test-crd-publish-openapi-5882-crds test-cr' Jun 3 00:41:57.892: INFO: stderr: "" Jun 3 00:41:57.892: INFO: stdout: "e2e-test-crd-publish-openapi-5882-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 3 00:41:57.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-5882-crds' Jun 3 00:41:58.159: INFO: stderr: "" Jun 3 00:41:58.159: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5882-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:42:01.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3251" for this suite. • [SLOW TEST:10.387 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":230,"skipped":3648,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:42:01.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-522/configmap-test-42b6357e-91b2-48df-9287-480e3d4c9275 STEP: Creating a pod to test consume configMaps Jun 3 00:42:01.249: INFO: Waiting up to 5m0s for pod "pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c" in namespace "configmap-522" to be "Succeeded or Failed" Jun 3 00:42:01.254: INFO: Pod "pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.561817ms Jun 3 00:42:03.258: INFO: Pod "pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008769264s Jun 3 00:42:05.291: INFO: Pod "pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042042358s STEP: Saw pod success Jun 3 00:42:05.291: INFO: Pod "pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c" satisfied condition "Succeeded or Failed" Jun 3 00:42:05.302: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c container env-test: STEP: delete the pod Jun 3 00:42:05.368: INFO: Waiting for pod pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c to disappear Jun 3 00:42:05.381: INFO: Pod pod-configmaps-138c5c8f-d894-4678-856e-ee475da52c1c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:42:05.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-522" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3657,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:42:05.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Jun 3 00:42:05.460: INFO: Waiting up to 5m0s for pod "client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b" in namespace "containers-9904" to be "Succeeded or Failed" Jun 3 00:42:05.492: INFO: Pod "client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.220901ms Jun 3 00:42:07.609: INFO: Pod "client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14881977s Jun 3 00:42:09.621: INFO: Pod "client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160414499s STEP: Saw pod success Jun 3 00:42:09.621: INFO: Pod "client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b" satisfied condition "Succeeded or Failed" Jun 3 00:42:09.624: INFO: Trying to get logs from node latest-worker2 pod client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b container test-container: STEP: delete the pod Jun 3 00:42:09.643: INFO: Waiting for pod client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b to disappear Jun 3 00:42:09.647: INFO: Pod client-containers-b24b89c1-bae0-460e-953d-72b9d9caec5b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:42:09.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9904" for this suite. 
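Note: the Docker Containers test above verifies that spec.containers[].command in a pod spec overrides the image's ENTRYPOINT (args would override CMD instead). A minimal sketch of the pattern (pod name, image, and command line are illustrative; the e2e fixture uses its own test image):

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29                       # illustrative image
      command: ["/bin/sh", "-c", "echo entrypoint overridden"]    # replaces the image ENTRYPOINT

Such a pod runs to completion, which is why the test polls for the "Succeeded or Failed" condition exactly as logged above.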
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3659,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:42:09.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:42:09.716: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 3 00:42:09.774: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 3 00:42:14.794: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 00:42:14.795: INFO: Creating deployment "test-rolling-update-deployment" Jun 3 00:42:14.799: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 3 00:42:14.813: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 3 00:42:16.821: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 3 00:42:16.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741734, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741734, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741734, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741734, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 00:42:18.828: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 3 00:42:18.839: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1176 /apis/apps/v1/namespaces/deployment-1176/deployments/test-rolling-update-deployment 916364e7-3982-4a07-9da7-81dff1fc3e45 9816010 1 2020-06-03 00:42:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-06-03 00:42:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-03 00:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ae9288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-03 00:42:14 +0000 UTC,LastTransitionTime:2020-06-03 00:42:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-06-03 00:42:18 +0000 UTC,LastTransitionTime:2020-06-03 00:42:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 3 00:42:18.842: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-1176 /apis/apps/v1/namespaces/deployment-1176/replicasets/test-rolling-update-deployment-df7bb669b aeda3c57-5a94-4410-afae-a4c0bdf56a64 9815997 1 2020-06-03 00:42:14 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 916364e7-3982-4a07-9da7-81dff1fc3e45 0xc004c117b0 0xc004c117b1}] [] [{kube-controller-manager Update apps/v1 2020-06-03 00:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"916364e7-3982-4a07-9da7-81dff1fc3e45\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c11838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 3 00:42:18.842: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 3 00:42:18.842: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1176 /apis/apps/v1/namespaces/deployment-1176/replicasets/test-rolling-update-controller 065da57d-628f-4627-973f-92c12ca84fd3 9816009 2 2020-06-03 00:42:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 916364e7-3982-4a07-9da7-81dff1fc3e45 0xc004c1167f 0xc004c11690}] [] [{e2e.test Update apps/v1 2020-06-03 00:42:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-03 00:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"916364e7-3982-4a07-9da7-81dff1fc3e45\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004c11748 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 00:42:18.845: INFO: Pod "test-rolling-update-deployment-df7bb669b-792rl" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-792rl test-rolling-update-deployment-df7bb669b- deployment-1176 /api/v1/namespaces/deployment-1176/pods/test-rolling-update-deployment-df7bb669b-792rl 3e8d8268-9694-421d-8f27-878bc12d02de 9815996 0 2020-06-03 00:42:14 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b aeda3c57-5a94-4410-afae-a4c0bdf56a64 0xc004ae9850 0xc004ae9851}] [] [{kube-controller-manager Update v1 2020-06-03 00:42:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeda3c57-5a94-4410-afae-a4c0bdf56a64\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-03 00:42:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hs5tb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hs5tb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hs5tb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-03 00:42:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:42:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 00:42:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.177,StartTime:2020-06-03 00:42:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 00:42:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://6008fca2d79e5c514b16050061b0ea4d25ffbddec6924ea69d6c102bfc86e7b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:42:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1176" for this suite. • [SLOW TEST:9.199 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":233,"skipped":3668,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:42:18.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7880.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7880.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7880.svc.cluster.local SRV)" && test 
-n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7880.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7880.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7880.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 152.225.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.225.152_udp@PTR;check="$$(dig +tcp +noall +answer +search 152.225.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.225.152_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7880.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7880.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7880.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7880.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7880.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7880.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 152.225.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.225.152_udp@PTR;check="$$(dig +tcp +noall +answer +search 152.225.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.225.152_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 00:42:25.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.347: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.369: INFO: Unable to read jessie_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.406: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.409: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:25.425: INFO: Lookups using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 failed for: [wheezy_udp@dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_udp@dns-test-service.dns-7880.svc.cluster.local jessie_tcp@dns-test-service.dns-7880.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local] Jun 3 00:42:30.430: INFO: Unable to read wheezy_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.433: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods 
dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.437: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.440: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.463: INFO: Unable to read jessie_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.466: INFO: Unable to read jessie_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.470: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.472: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:30.487: INFO: Lookups using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 failed for: [wheezy_udp@dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_udp@dns-test-service.dns-7880.svc.cluster.local jessie_tcp@dns-test-service.dns-7880.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local] Jun 3 00:42:35.429: INFO: Unable to read wheezy_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.432: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.436: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.440: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.463: INFO: Unable to read jessie_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could 
not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.466: INFO: Unable to read jessie_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.469: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.472: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:35.491: INFO: Lookups using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 failed for: [wheezy_udp@dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_udp@dns-test-service.dns-7880.svc.cluster.local jessie_tcp@dns-test-service.dns-7880.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local] Jun 3 00:42:40.429: INFO: Unable to read wheezy_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.432: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.434: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.436: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.456: INFO: Unable to read jessie_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.459: INFO: Unable to read jessie_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.461: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.465: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod 
dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:40.485: INFO: Lookups using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 failed for: [wheezy_udp@dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_udp@dns-test-service.dns-7880.svc.cluster.local jessie_tcp@dns-test-service.dns-7880.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local] Jun 3 00:42:45.429: INFO: Unable to read wheezy_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.432: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.435: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.437: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.456: INFO: Unable to read jessie_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.459: INFO: Unable to read jessie_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.465: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:45.483: INFO: Lookups using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 failed for: [wheezy_udp@dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_udp@dns-test-service.dns-7880.svc.cluster.local jessie_tcp@dns-test-service.dns-7880.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local] Jun 3 00:42:50.430: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.433: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.460: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.463: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.484: INFO: Unable to read jessie_udp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.487: INFO: Unable to read jessie_tcp@dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.489: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.492: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local from pod dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02: the server could not find the requested resource (get pods dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02) Jun 3 00:42:50.509: INFO: Lookups using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 failed for: [wheezy_udp@dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@dns-test-service.dns-7880.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_udp@dns-test-service.dns-7880.svc.cluster.local jessie_tcp@dns-test-service.dns-7880.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7880.svc.cluster.local] Jun 3 00:42:55.495: INFO: DNS probes using dns-7880/dns-test-67e37fbb-a7c2-4f8a-9534-019da9400e02 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:42:56.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7880" for this suite. 
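The probe loop above is just dig run repeatedly from inside the probe pod, so the early runs of "Unable to read ..." lines are expected: they are the poll loop waiting for the records to appear before the final "DNS probes ... succeeded". The same records can be checked by hand. A minimal sketch, assuming kubectl against a cluster with the default cluster.local DNS suffix; the namespace, service name, and image below are illustrative, not taken from this run (any image that ships dig works; tutum/dnsutils is one public option):

kubectl create namespace dns-check
kubectl apply -n dns-check -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - name: http
    port: 80
    protocol: TCP
EOF
kubectl -n dns-check run digger --image=tutum/dnsutils --restart=Never -- sleep 3600
kubectl -n dns-check wait --for=condition=Ready pod/digger
# A record over UDP, then over TCP (the +notcp / +tcp pair used above):
kubectl -n dns-check exec digger -- dig +notcp +noall +answer demo-svc.dns-check.svc.cluster.local A
kubectl -n dns-check exec digger -- dig +tcp +noall +answer demo-svc.dns-check.svc.cluster.local A
# SRV record for the named port, mirroring the _http._tcp lookups above:
kubectl -n dns-check exec digger -- dig +noall +answer _http._tcp.demo-svc.dns-check.svc.cluster.local SRV

Unlike the test's headless service, this ClusterIP service resolves even with no ready endpoints, which keeps the sketch self-contained.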
• [SLOW TEST:37.706 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":234,"skipped":3672,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:42:56.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:42:57.283: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:42:59.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741777, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741777, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741777, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726741777, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:43:02.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:43:14.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1084" for this suite. STEP: Destroying namespace "webhook-1084-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.128 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":235,"skipped":3680,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:43:14.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:43:14.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746" in namespace "downward-api-8793" to be "Succeeded or Failed" Jun 3 00:43:14.778: INFO: Pod "downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014646ms Jun 3 00:43:16.783: INFO: Pod "downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008210118s Jun 3 00:43:18.787: INFO: Pod "downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012812289s STEP: Saw pod success Jun 3 00:43:18.787: INFO: Pod "downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746" satisfied condition "Succeeded or Failed" Jun 3 00:43:18.790: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746 container client-container: STEP: delete the pod Jun 3 00:43:18.828: INFO: Waiting for pod downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746 to disappear Jun 3 00:43:18.838: INFO: Pod downwardapi-volume-ab042f9d-2566-42db-a498-e4395880c746 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:43:18.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8793" for this suite. 
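The pod this test creates mounts a downwardAPI volume that exposes the container's own cpu limit as a file and reads it back. A minimal sketch of an equivalent pod, with illustrative names and image (not taken from the run); with a divisor of 1m, a 500m limit reads back as 500:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
kubectl logs downward-cpu-demo   # prints 500 once the pod has Succeeded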
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3689,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:43:18.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:43:18.930: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba" in namespace "projected-6222" to be "Succeeded or Failed" Jun 3 00:43:18.934: INFO: Pod "downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.904137ms Jun 3 00:43:20.938: INFO: Pod "downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007773594s Jun 3 00:43:22.941: INFO: Pod "downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011543209s STEP: Saw pod success Jun 3 00:43:22.941: INFO: Pod "downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba" satisfied condition "Succeeded or Failed" Jun 3 00:43:22.944: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba container client-container: STEP: delete the pod Jun 3 00:43:23.012: INFO: Waiting for pod downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba to disappear Jun 3 00:43:23.036: INFO: Pod downwardapi-volume-edfb6804-661a-4f54-bad5-1dbf64d9a9ba no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:43:23.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6222" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:43:23.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ff5a19ac-0d3d-4d71-aa3b-4b3ff1936b31 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ff5a19ac-0d3d-4d71-aa3b-4b3ff1936b31 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:43:29.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3875" for this suite. • [SLOW TEST:6.166 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3742,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:43:29.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7332 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 00:43:29.256: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 00:43:29.350: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:43:31.354: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:43:33.354: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:43:35.354: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Jun 3 00:43:37.355: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:43:39.356: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:43:41.355: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:43:43.354: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:43:45.460: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:43:47.355: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 00:43:47.360: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 3 00:43:49.365: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 3 00:43:51.365: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 00:43:55.473: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.233:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7332 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:43:55.473: INFO: >>> kubeConfig: /root/.kube/config I0603 00:43:55.506101 7 log.go:172] (0xc002200bb0) (0xc001b64e60) Create stream I0603 00:43:55.506135 7 log.go:172] (0xc002200bb0) (0xc001b64e60) Stream added, broadcasting: 1 I0603 00:43:55.508355 7 log.go:172] (0xc002200bb0) Reply frame received for 1 I0603 00:43:55.508392 7 log.go:172] (0xc002200bb0) (0xc00201d040) Create stream I0603 00:43:55.508404 7 log.go:172] (0xc002200bb0) (0xc00201d040) Stream added, broadcasting: 3 I0603 00:43:55.509425 7 log.go:172] (0xc002200bb0) Reply frame received for 3 I0603 00:43:55.509459 7 log.go:172] (0xc002200bb0) (0xc00201d180) Create stream I0603 00:43:55.509472 7 log.go:172] (0xc002200bb0) (0xc00201d180) Stream added, broadcasting: 5 I0603 00:43:55.510524 7 log.go:172] (0xc002200bb0) Reply frame received for 5 I0603 00:43:55.593379 7 log.go:172] (0xc002200bb0) Data frame received for 5 I0603 00:43:55.593409 7 log.go:172] (0xc00201d180) (5) Data frame handling I0603 00:43:55.593435 7 log.go:172] (0xc002200bb0) Data frame received for 3 I0603 00:43:55.593452 7 log.go:172] (0xc00201d040) (3) Data frame handling I0603 00:43:55.593467 7 log.go:172] (0xc00201d040) (3) Data frame sent I0603 00:43:55.593483 7 log.go:172] (0xc002200bb0) Data frame received for 3 I0603 00:43:55.593510 7 log.go:172] (0xc00201d040) (3) Data frame handling I0603 00:43:55.595151 7 log.go:172] (0xc002200bb0) Data frame received for 1 I0603 00:43:55.595200 7 log.go:172] (0xc001b64e60) (1) Data frame handling I0603 00:43:55.595217 7 log.go:172] (0xc001b64e60) (1) Data frame sent I0603 00:43:55.595337 7 log.go:172] (0xc002200bb0) (0xc001b64e60) Stream removed, broadcasting: 1 I0603 00:43:55.595389 7 log.go:172] (0xc002200bb0) (0xc001b64e60) Stream removed, broadcasting: 1 I0603 00:43:55.595397 7 log.go:172] (0xc002200bb0) (0xc00201d040) Stream removed, broadcasting: 3 I0603 00:43:55.595476 7 log.go:172] (0xc002200bb0) Go away received I0603 00:43:55.595544 7 log.go:172] (0xc002200bb0) (0xc00201d180) Stream removed, broadcasting: 5 Jun 3 00:43:55.595: INFO: Found all expected endpoints: [netserver-0] Jun 3 00:43:55.600: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.179:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7332 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 
00:43:55.600: INFO: >>> kubeConfig: /root/.kube/config I0603 00:43:55.626375 7 log.go:172] (0xc001afadc0) (0xc002095220) Create stream I0603 00:43:55.626399 7 log.go:172] (0xc001afadc0) (0xc002095220) Stream added, broadcasting: 1 I0603 00:43:55.628467 7 log.go:172] (0xc001afadc0) Reply frame received for 1 I0603 00:43:55.628502 7 log.go:172] (0xc001afadc0) (0xc0020952c0) Create stream I0603 00:43:55.628514 7 log.go:172] (0xc001afadc0) (0xc0020952c0) Stream added, broadcasting: 3 I0603 00:43:55.629702 7 log.go:172] (0xc001afadc0) Reply frame received for 3 I0603 00:43:55.629743 7 log.go:172] (0xc001afadc0) (0xc00201d4a0) Create stream I0603 00:43:55.629758 7 log.go:172] (0xc001afadc0) (0xc00201d4a0) Stream added, broadcasting: 5 I0603 00:43:55.630681 7 log.go:172] (0xc001afadc0) Reply frame received for 5 I0603 00:43:55.696312 7 log.go:172] (0xc001afadc0) Data frame received for 3 I0603 00:43:55.696350 7 log.go:172] (0xc0020952c0) (3) Data frame handling I0603 00:43:55.696488 7 log.go:172] (0xc0020952c0) (3) Data frame sent I0603 00:43:55.696663 7 log.go:172] (0xc001afadc0) Data frame received for 5 I0603 00:43:55.696691 7 log.go:172] (0xc00201d4a0) (5) Data frame handling I0603 00:43:55.696822 7 log.go:172] (0xc001afadc0) Data frame received for 3 I0603 00:43:55.696835 7 log.go:172] (0xc0020952c0) (3) Data frame handling I0603 00:43:55.698704 7 log.go:172] (0xc001afadc0) Data frame received for 1 I0603 00:43:55.698726 7 log.go:172] (0xc002095220) (1) Data frame handling I0603 00:43:55.698737 7 log.go:172] (0xc002095220) (1) Data frame sent I0603 00:43:55.698748 7 log.go:172] (0xc001afadc0) (0xc002095220) Stream removed, broadcasting: 1 I0603 00:43:55.698779 7 log.go:172] (0xc001afadc0) Go away received I0603 00:43:55.698890 7 log.go:172] (0xc001afadc0) (0xc002095220) Stream removed, broadcasting: 1 I0603 00:43:55.698915 7 log.go:172] (0xc001afadc0) (0xc0020952c0) Stream removed, broadcasting: 3 I0603 00:43:55.698934 7 log.go:172] (0xc001afadc0) (0xc00201d4a0) Stream removed, broadcasting: 5 Jun 3 00:43:55.698: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:43:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7332" for this suite. 
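The check above boils down to exec'ing curl in one pod against another pod's IP on port 8080, served by agnhost's netexec; the verbose "log.go:172" stream frames are just the exec transport (SPDY) at work. A reproduction sketch using the same image and curl flags as the run, with illustrative pod names:

kubectl run netserver --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --restart=Never -- netexec --http-port=8080
kubectl run client --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --restart=Never -- pause
kubectl wait --for=condition=Ready pod/netserver pod/client
# Resolve the server pod's IP, then hit its /hostName endpoint from the client pod:
POD_IP=$(kubectl get pod netserver -o jsonpath='{.status.podIP}')
kubectl exec client -- /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://$POD_IP:8080/hostName"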
• [SLOW TEST:26.497 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3751,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:43:55.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:44:55.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7748" for this suite. 
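What this test asserts for a full minute is that a failing readiness probe keeps a pod unready without ever restarting it; only liveness probes trigger restarts. A minimal pod that shows the same behavior, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: main
    image: busybox
    args: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      periodSeconds: 5
EOF
# READY stays 0/1 and RESTARTS stays 0, no matter how long you watch:
kubectl get pod never-ready-demo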
• [SLOW TEST:60.104 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:44:55.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:46:55.925: INFO: Deleting pod "var-expansion-e5664442-4c44-4403-ad6d-f6b6978e26ad" in namespace "var-expansion-9633" Jun 3 00:46:55.930: INFO: Wait up to 5m0s for pod "var-expansion-e5664442-4c44-4403-ad6d-f6b6978e26ad" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:46:57.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9633" for this suite. 
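This test builds a pod whose subPathExpr expands, via an environment variable, to an absolute path and expects the kubelet to refuse to start it, since expanded subpaths must stay relative; most of the two-minute runtime is waiting out that rejection before cleanup. A hedged sketch of such a pod (names illustrative, and the exact failure event wording varies by version):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-abs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    args: ["sleep", "3600"]
    env:
    - name: SUBPATH
      value: "/not-relative"   # expands to an absolute path, which the kubelet rejects
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: $(SUBPATH)
  volumes:
  - name: work
    emptyDir: {}
EOF
# The container never starts; the events show the subpath rejection:
kubectl describe pod subpath-abs-demo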
• [SLOW TEST:122.134 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":241,"skipped":3826,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:46:57.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-j7hnz in namespace proxy-9886 I0603 00:46:58.066876 7 runners.go:190] Created replication controller with name: proxy-service-j7hnz, namespace: proxy-9886, replica count: 1 I0603 00:46:59.117248 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:47:00.117568 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:47:01.117766 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 00:47:02.118022 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 00:47:03.118235 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 00:47:04.118491 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 00:47:05.118770 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 00:47:06.119040 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 00:47:07.119278 7 runners.go:190] proxy-service-j7hnz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:47:07.122: INFO: setup took 9.132059667s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 3 00:47:07.128: INFO: (0) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 5.674856ms) Jun 3 00:47:07.128: INFO: (0) 
/api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 6.079315ms) Jun 3 00:47:07.129: INFO: (0) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 6.889315ms) Jun 3 00:47:07.129: INFO: (0) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 7.358246ms) Jun 3 00:47:07.134: INFO: (0) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 11.969819ms) Jun 3 00:47:07.134: INFO: (0) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 12.055902ms) Jun 3 00:47:07.140: INFO: (0) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 18.245604ms) Jun 3 00:47:07.140: INFO: (0) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 18.21242ms) Jun 3 00:47:07.142: INFO: (0) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: ... (200; 20.485531ms) Jun 3 00:47:07.143: INFO: (0) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 20.93584ms) Jun 3 00:47:07.146: INFO: (0) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 24.366212ms) Jun 3 00:47:07.146: INFO: (0) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 24.344236ms) Jun 3 00:47:07.147: INFO: (0) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 24.948716ms) Jun 3 00:47:07.147: INFO: (0) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 24.972959ms) Jun 3 00:47:07.150: INFO: (1) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 3.419468ms) Jun 3 00:47:07.151: INFO: (1) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.752578ms) Jun 3 00:47:07.152: INFO: (1) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 5.064442ms) Jun 3 00:47:07.152: INFO: (1) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 5.128245ms) Jun 3 00:47:07.152: INFO: (1) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 5.212339ms) Jun 3 00:47:07.152: INFO: (1) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 5.360887ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 6.134336ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 6.252529ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 6.209837ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 6.206725ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... 
(200; 6.215169ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 6.235646ms) Jun 3 00:47:07.153: INFO: (1) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 6.339803ms) Jun 3 00:47:07.154: INFO: (1) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 6.47506ms) Jun 3 00:47:07.154: INFO: (1) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 6.535161ms) Jun 3 00:47:07.158: INFO: (2) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 4.766532ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.798142ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 4.840445ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 5.000867ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 5.186302ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 5.300533ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 5.30887ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 5.377613ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 5.560038ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: ... (200; 5.705256ms) Jun 3 00:47:07.159: INFO: (2) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 5.805639ms) Jun 3 00:47:07.160: INFO: (2) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 6.12076ms) Jun 3 00:47:07.160: INFO: (2) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 6.173075ms) Jun 3 00:47:07.160: INFO: (2) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 6.116529ms) Jun 3 00:47:07.160: INFO: (2) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 6.098744ms) Jun 3 00:47:07.163: INFO: (3) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... (200; 4.447854ms) Jun 3 00:47:07.165: INFO: (3) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 4.490605ms) Jun 3 00:47:07.165: INFO: (3) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 4.611611ms) Jun 3 00:47:07.165: INFO: (3) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... 
(200; 4.527224ms) Jun 3 00:47:07.165: INFO: (3) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 4.610874ms) Jun 3 00:47:07.165: INFO: (3) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 4.690613ms) Jun 3 00:47:07.165: INFO: (3) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 5.276052ms) Jun 3 00:47:07.166: INFO: (3) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 5.581936ms) Jun 3 00:47:07.166: INFO: (3) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 5.829703ms) Jun 3 00:47:07.170: INFO: (4) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 4.208799ms) Jun 3 00:47:07.170: INFO: (4) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 4.239576ms) Jun 3 00:47:07.170: INFO: (4) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 4.439825ms) Jun 3 00:47:07.171: INFO: (4) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.825603ms) Jun 3 00:47:07.171: INFO: (4) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 4.914429ms) Jun 3 00:47:07.171: INFO: (4) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 4.854479ms) Jun 3 00:47:07.171: INFO: (4) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 4.893855ms) Jun 3 00:47:07.171: INFO: (4) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 5.201249ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 5.626301ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 5.750402ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 5.807844ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 5.862407ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 6.028428ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... (200; 6.003652ms) Jun 3 00:47:07.172: INFO: (4) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 6.060211ms) Jun 3 00:47:07.175: INFO: (5) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 3.222834ms) Jun 3 00:47:07.175: INFO: (5) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 2.912719ms) Jun 3 00:47:07.176: INFO: (5) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 3.011845ms) Jun 3 00:47:07.176: INFO: (5) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... 
(200; 3.430497ms) Jun 3 00:47:07.176: INFO: (5) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 3.338224ms) Jun 3 00:47:07.176: INFO: (5) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 3.486848ms) Jun 3 00:47:07.176: INFO: (5) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.611161ms) Jun 3 00:47:07.177: INFO: (5) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 3.703335ms) Jun 3 00:47:07.178: INFO: (5) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 5.139112ms) Jun 3 00:47:07.178: INFO: (5) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 4.522545ms) Jun 3 00:47:07.178: INFO: (5) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 4.631371ms) Jun 3 00:47:07.178: INFO: (5) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 4.514225ms) Jun 3 00:47:07.178: INFO: (5) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 7.156709ms) Jun 3 00:47:07.186: INFO: (6) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 7.424402ms) Jun 3 00:47:07.186: INFO: (6) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 7.442792ms) Jun 3 00:47:07.186: INFO: (6) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 7.59046ms) Jun 3 00:47:07.186: INFO: (6) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... (200; 8.54103ms) Jun 3 00:47:07.187: INFO: (6) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 8.645576ms) Jun 3 00:47:07.188: INFO: (6) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 8.840061ms) Jun 3 00:47:07.191: INFO: (7) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.416461ms) Jun 3 00:47:07.195: INFO: (7) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 7.366731ms) Jun 3 00:47:07.195: INFO: (7) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 7.328429ms) Jun 3 00:47:07.196: INFO: (7) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... 
(200; 8.307989ms) Jun 3 00:47:07.196: INFO: (7) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 8.447781ms) Jun 3 00:47:07.196: INFO: (7) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 8.500534ms) Jun 3 00:47:07.196: INFO: (7) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 8.554699ms) Jun 3 00:47:07.197: INFO: (7) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 8.806394ms) Jun 3 00:47:07.197: INFO: (7) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 8.874224ms) Jun 3 00:47:07.197: INFO: (7) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 8.849755ms) Jun 3 00:47:07.197: INFO: (7) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 8.950625ms) Jun 3 00:47:07.197: INFO: (7) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 9.296607ms) Jun 3 00:47:07.198: INFO: (7) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 9.847573ms) Jun 3 00:47:07.198: INFO: (7) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 9.889439ms) Jun 3 00:47:07.208: INFO: (8) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 10.75447ms) Jun 3 00:47:07.208: INFO: (8) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 10.731944ms) Jun 3 00:47:07.209: INFO: (8) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 11.140055ms) Jun 3 00:47:07.209: INFO: (8) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 11.121636ms) Jun 3 00:47:07.209: INFO: (8) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 11.152192ms) Jun 3 00:47:07.209: INFO: (8) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 11.330009ms) Jun 3 00:47:07.209: INFO: (8) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 11.370021ms) Jun 3 00:47:07.210: INFO: (8) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 12.742184ms) Jun 3 00:47:07.211: INFO: (8) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 12.987428ms) Jun 3 00:47:07.211: INFO: (8) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... (200; 7.67394ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 8.153538ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 8.091738ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 8.457015ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 8.411449ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... 
(200; 8.813972ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 8.556942ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 8.709759ms) Jun 3 00:47:07.220: INFO: (9) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 8.848294ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 4.560334ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: ... (200; 4.768285ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.921002ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 4.984526ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 4.922165ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.946943ms) Jun 3 00:47:07.225: INFO: (10) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 5.216309ms) Jun 3 00:47:07.226: INFO: (10) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 5.608953ms) Jun 3 00:47:07.226: INFO: (10) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 5.787433ms) Jun 3 00:47:07.226: INFO: (10) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 5.917928ms) Jun 3 00:47:07.227: INFO: (10) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 6.379942ms) Jun 3 00:47:07.227: INFO: (10) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 6.529744ms) Jun 3 00:47:07.227: INFO: (10) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 6.570244ms) Jun 3 00:47:07.227: INFO: (10) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 6.544249ms) Jun 3 00:47:07.227: INFO: (10) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 6.582092ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 3.664652ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 3.747915ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 3.834589ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 3.83071ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... 
(200; 4.125631ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.154706ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 4.331991ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 4.286028ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.305317ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 4.498956ms) Jun 3 00:47:07.231: INFO: (11) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 5.848159ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 5.863614ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 5.899529ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 5.890739ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 5.882182ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 5.984212ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 5.977771ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 5.947898ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 6.054169ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 6.156598ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 6.078324ms) Jun 3 00:47:07.239: INFO: (12) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 4.440025ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 5.142841ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 5.002352ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 5.196696ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... 
(200; 5.525594ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 5.778371ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 5.982061ms) Jun 3 00:47:07.245: INFO: (13) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 5.743267ms) Jun 3 00:47:07.246: INFO: (13) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 5.743243ms) Jun 3 00:47:07.246: INFO: (13) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 5.89309ms) Jun 3 00:47:07.246: INFO: (13) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 6.496267ms) Jun 3 00:47:07.246: INFO: (13) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 5.802035ms) Jun 3 00:47:07.246: INFO: (13) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... (200; 3.565ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 3.691863ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.72788ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 3.674224ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 3.619969ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 3.93663ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 3.997539ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 4.061782ms) Jun 3 00:47:07.250: INFO: (14) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 4.103964ms) Jun 3 00:47:07.251: INFO: (14) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 4.486884ms) Jun 3 00:47:07.251: INFO: (14) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 4.742808ms) Jun 3 00:47:07.251: INFO: (14) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 4.800612ms) Jun 3 00:47:07.251: INFO: (14) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 4.939796ms) Jun 3 00:47:07.251: INFO: (14) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... (200; 3.245154ms) Jun 3 00:47:07.255: INFO: (15) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.300571ms) Jun 3 00:47:07.255: INFO: (15) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... 
(200; 3.31018ms) Jun 3 00:47:07.255: INFO: (15) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 3.501956ms) Jun 3 00:47:07.255: INFO: (15) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 3.56078ms) Jun 3 00:47:07.255: INFO: (15) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 3.626011ms) Jun 3 00:47:07.255: INFO: (15) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 3.911444ms) Jun 3 00:47:07.256: INFO: (15) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 4.00593ms) Jun 3 00:47:07.256: INFO: (15) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 4.311601ms) Jun 3 00:47:07.256: INFO: (15) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 4.25419ms) Jun 3 00:47:07.256: INFO: (15) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 4.360297ms) Jun 3 00:47:07.256: INFO: (15) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 4.404797ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 2.794498ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 2.860586ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 2.977721ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 3.30202ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.332459ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 3.495793ms) Jun 3 00:47:07.259: INFO: (16) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 3.476656ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 3.572895ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 3.702299ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 3.782632ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 3.843781ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 3.938553ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 3.972594ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 3.939643ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 3.956761ms) Jun 3 00:47:07.260: INFO: (16) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 3.160577ms) Jun 3 00:47:07.263: INFO: (17) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... 
(200; 3.161447ms) Jun 3 00:47:07.263: INFO: (17) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 3.223866ms) Jun 3 00:47:07.263: INFO: (17) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test (200; 3.819714ms) Jun 3 00:47:07.274: INFO: (18) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 3.812089ms) Jun 3 00:47:07.274: INFO: (18) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 3.975597ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 4.570035ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: ... (200; 4.875313ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:1080/proxy/: test<... (200; 4.927022ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 4.888343ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 4.938861ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 4.853245ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 4.97766ms) Jun 3 00:47:07.275: INFO: (18) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 5.203182ms) Jun 3 00:47:07.278: INFO: (19) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 2.6717ms) Jun 3 00:47:07.278: INFO: (19) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:162/proxy/: bar (200; 2.75601ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.207653ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:1080/proxy/: ... (200; 4.287458ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:443/proxy/: test<... 
(200; 4.174373ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:460/proxy/: tls baz (200; 4.16204ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/https:proxy-service-j7hnz-hxlvq:462/proxy/: tls qux (200; 4.411529ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/: foo (200; 4.482238ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/pods/proxy-service-j7hnz-hxlvq/proxy/: test (200; 4.212949ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname1/proxy/: foo (200; 4.422686ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname1/proxy/: foo (200; 4.444825ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/services/proxy-service-j7hnz:portname2/proxy/: bar (200; 4.821939ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname1/proxy/: tls baz (200; 4.460911ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/services/http:proxy-service-j7hnz:portname2/proxy/: bar (200; 4.779514ms) Jun 3 00:47:07.280: INFO: (19) /api/v1/namespaces/proxy-9886/services/https:proxy-service-j7hnz:tlsportname2/proxy/: tls qux (200; 4.861451ms) STEP: deleting ReplicationController proxy-service-j7hnz in namespace proxy-9886, will wait for the garbage collector to delete the pods Jun 3 00:47:07.339: INFO: Deleting ReplicationController proxy-service-j7hnz took: 7.000342ms Jun 3 00:47:07.639: INFO: Terminating ReplicationController proxy-service-j7hnz pods took: 300.192736ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:47:14.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9886" for this suite. 
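
Each of the numbered (0)–(19) request batches above exercises the apiserver's proxy sub-resource for pods and services, where the resource name encodes scheme:name:port. As a minimal sketch of how a client reproduces one of those URLs with client-go (namespace, pod name, and kubeconfig path are taken from this run; this is illustrative, not the suite's actual code):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One of the exact paths from the log above:
	// GET /api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/proxy-9886/pods/http:proxy-service-j7hnz-hxlvq:160/proxy/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the log shows this endpoint answering "foo" with HTTP 200
}
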
• [SLOW TEST:17.011 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":242,"skipped":3843,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:47:14.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:47:15.005: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:47:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3864" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":3852,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:47:19.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-7ec7e30b-145f-46be-8b97-2acdd08de4dc STEP: Creating a pod to test consume configMaps Jun 3 00:47:19.170: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d" in namespace "projected-8681" to be "Succeeded or Failed" Jun 3 00:47:19.176: INFO: Pod "pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.434739ms Jun 3 00:47:21.180: INFO: Pod "pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01078397s Jun 3 00:47:23.204: INFO: Pod "pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034064769s STEP: Saw pod success Jun 3 00:47:23.204: INFO: Pod "pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d" satisfied condition "Succeeded or Failed" Jun 3 00:47:23.208: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d container projected-configmap-volume-test: STEP: delete the pod Jun 3 00:47:23.258: INFO: Waiting for pod pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d to disappear Jun 3 00:47:23.266: INFO: Pod pod-projected-configmaps-54ba4487-63c9-4145-8023-0c62c3973f4d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:47:23.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8681" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":244,"skipped":3854,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:47:23.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-d3cbefb3-e4be-40e6-a8b8-7ac4028c92d7 STEP: Creating a pod to test consume configMaps Jun 3 00:47:23.360: INFO: Waiting up to 5m0s for pod "pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5" in namespace "configmap-2572" to be "Succeeded or Failed" Jun 3 00:47:23.411: INFO: Pod "pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.981221ms Jun 3 00:47:25.415: INFO: Pod "pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054774042s Jun 3 00:47:27.419: INFO: Pod "pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058675935s STEP: Saw pod success Jun 3 00:47:27.419: INFO: Pod "pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5" satisfied condition "Succeeded or Failed" Jun 3 00:47:27.422: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5 container configmap-volume-test: STEP: delete the pod Jun 3 00:47:27.455: INFO: Waiting for pod pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5 to disappear Jun 3 00:47:27.467: INFO: Pod pod-configmaps-6dcf6d28-644b-4c26-b6b7-283a263b64f5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:47:27.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2572" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":3855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:47:27.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-dc3d766b-9f3b-453e-b9e1-4515191038fb STEP: Creating secret with name s-test-opt-upd-56fe7e78-e455-4fde-8e02-0a8fb3b951aa STEP: Creating the pod STEP: Deleting secret s-test-opt-del-dc3d766b-9f3b-453e-b9e1-4515191038fb STEP: Updating secret s-test-opt-upd-56fe7e78-e455-4fde-8e02-0a8fb3b951aa STEP: Creating secret with name s-test-opt-create-aed7514f-52d1-44e3-b387-8254b55e6fa0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:48:40.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2702" for this suite. 
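
The optional-updates behavior just verified hinges on the projected volume marking each secret source optional, so the pod keeps running while s-test-opt-del-... is deleted and s-test-opt-create-... appears. A minimal sketch of such a volume, assuming shortened secret names (the real spec is built inside the e2e framework):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Deleting this secret must not break the pod: it is optional.
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional,
					}},
					// Updates to this secret should eventually be reflected in the volume.
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
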
• [SLOW TEST:72.588 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":3885,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:48:40.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 3 00:48:40.160: INFO: Waiting up to 5m0s for pod "pod-a4db0891-c5a2-49d8-a550-76bb57047330" in namespace "emptydir-7207" to be "Succeeded or Failed" Jun 3 00:48:40.181: INFO: Pod "pod-a4db0891-c5a2-49d8-a550-76bb57047330": Phase="Pending", Reason="", readiness=false. Elapsed: 21.09642ms Jun 3 00:48:42.188: INFO: Pod "pod-a4db0891-c5a2-49d8-a550-76bb57047330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02779252s Jun 3 00:48:44.192: INFO: Pod "pod-a4db0891-c5a2-49d8-a550-76bb57047330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03153922s STEP: Saw pod success Jun 3 00:48:44.192: INFO: Pod "pod-a4db0891-c5a2-49d8-a550-76bb57047330" satisfied condition "Succeeded or Failed" Jun 3 00:48:44.194: INFO: Trying to get logs from node latest-worker pod pod-a4db0891-c5a2-49d8-a550-76bb57047330 container test-container: STEP: delete the pod Jun 3 00:48:44.299: INFO: Waiting for pod pod-a4db0891-c5a2-49d8-a550-76bb57047330 to disappear Jun 3 00:48:44.307: INFO: Pod pod-a4db0891-c5a2-49d8-a550-76bb57047330 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:48:44.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7207" for this suite. 
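
The (root,0644,tmpfs) case above boils down to an emptyDir volume with medium Memory plus a container that writes a 0644 file into it and reads it back. A sketch of the volume and mount, with a placeholder image and command rather than the suite's exact ones:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the emptyDir with tmpfs.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox", // placeholder; the suite uses its own test image
			Command:      []string{"sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
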
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":3894,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:48:44.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 3 00:48:44.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd" in namespace "downward-api-1174" to be "Succeeded or Failed" Jun 3 00:48:44.423: INFO: Pod "downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.073183ms Jun 3 00:48:46.462: INFO: Pod "downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042032721s Jun 3 00:48:48.505: INFO: Pod "downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084369668s Jun 3 00:48:50.507: INFO: Pod "downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087109905s STEP: Saw pod success Jun 3 00:48:50.507: INFO: Pod "downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd" satisfied condition "Succeeded or Failed" Jun 3 00:48:50.510: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd container client-container: STEP: delete the pod Jun 3 00:48:50.540: INFO: Waiting for pod downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd to disappear Jun 3 00:48:50.593: INFO: Pod downwardapi-volume-0c04010c-6e5e-45be-ba2e-3b0e02d26cfd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:48:50.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1174" for this suite. 
• [SLOW TEST:6.322 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":3920,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:48:50.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:48:50.722: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:48:51.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9296" for this suite. 
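
Getting/updating/patching the CRD status sub-resource goes through the apiextensions client, with the sub-resource passed as a trailing argument. A hedged sketch, assuming a hypothetical CRD name:

package main

import (
	"context"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := clientset.NewForConfigOrDie(cfg)
	crds := cs.ApiextensionsV1().CustomResourceDefinitions()

	// Read the CRD; its Status carries accepted names, stored versions, etc.
	crd, err := crds.Get(context.TODO(), "foos.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	_ = crd.Status

	// Patch through the status sub-resource (note the trailing "status").
	patch := []byte(`{"metadata":{"labels":{"patched":"true"}}}`)
	if _, err := crds.Patch(context.TODO(), "foos.example.com",
		types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}
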
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":249,"skipped":3924,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:48:51.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 00:48:51.824: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 00:48:53.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742131, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742131, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742131, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742131, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 00:48:56.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:48:56.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9723" for this suite. STEP: Destroying namespace "webhook-9723-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.792 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":250,"skipped":3933,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:48:57.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 3 00:48:57.232: INFO: >>> kubeConfig: /root/.kube/config Jun 3 00:48:59.170: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:49:09.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7618" for this suite. 
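
The repeated kubeConfig loads in that test correspond to creating two CRDs of the same group/version and then asserting both kinds show up in the apiserver's published OpenAPI document. One way to peek at that document yourself; the kind names below are made up:

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the aggregated OpenAPI v2 document served by the apiserver.
	body, err := cs.Discovery().RESTClient().Get().
		AbsPath("/openapi/v2").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	// Once publishing settles, both custom kinds should have definitions.
	for _, kind := range []string{"e2e-test-foo", "e2e-test-bar"} { // hypothetical kinds
		fmt.Printf("%s published: %v\n", kind, strings.Contains(string(body), kind))
	}
}
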
• [SLOW TEST:12.679 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":251,"skipped":3941,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:49:09.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0603 00:49:50.846961 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 3 00:49:50.847: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:49:50.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6442" for this suite. 
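
The 30-second wait above checks that pods created by the RC survive its deletion, which only happens when the delete request asks for orphan propagation: the garbage collector then strips owner references from the pods instead of deleting them. Roughly, with a placeholder RC name:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Orphan propagation: dependents are kept and de-owned, not deleted.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("gc-6442").
		Delete(context.TODO(), "simpletest.rc", metav1.DeleteOptions{ // RC name is a placeholder
			PropagationPolicy: &orphan,
		})
	if err != nil {
		panic(err)
	}
}
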
• [SLOW TEST:41.045 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":252,"skipped":3957,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:49:50.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:50:07.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8081" for this suite. • [SLOW TEST:16.458 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":253,"skipped":3957,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:50:07.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Jun 3 00:50:07.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' Jun 3 00:50:07.627: INFO: stderr: "" Jun 3 00:50:07.627: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:50:07.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5296" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":254,"skipped":3965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:50:07.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:50:11.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8581" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":3994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:50:11.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-055265aa-893b-4acd-ab54-224274b321bf STEP: Creating configMap with name cm-test-opt-upd-3d29c138-4f7f-4224-8f37-defc590b2180 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-055265aa-893b-4acd-ab54-224274b321bf STEP: Updating configmap cm-test-opt-upd-3d29c138-4f7f-4224-8f37-defc590b2180 STEP: Creating configMap with name cm-test-opt-create-cfcdd63f-052f-497d-8570-473f3a7f6b2e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:50:20.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3556" for this suite. 
• [SLOW TEST:8.346 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4092,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:50:20.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 3 00:50:20.235: INFO: Waiting up to 5m0s for pod "downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446" in namespace "downward-api-5430" to be "Succeeded or Failed" Jun 3 00:50:20.252: INFO: Pod "downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446": Phase="Pending", Reason="", readiness=false. Elapsed: 16.894578ms Jun 3 00:50:22.256: INFO: Pod "downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021397317s Jun 3 00:50:24.260: INFO: Pod "downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025399863s STEP: Saw pod success Jun 3 00:50:24.260: INFO: Pod "downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446" satisfied condition "Succeeded or Failed" Jun 3 00:50:24.263: INFO: Trying to get logs from node latest-worker pod downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446 container dapi-container: STEP: delete the pod Jun 3 00:50:24.300: INFO: Waiting for pod downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446 to disappear Jun 3 00:50:24.373: INFO: Pod downward-api-a26bc4cc-a093-4f4f-bdb3-7e60412b9446 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:50:24.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5430" for this suite. 
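
When a container declares no resource limits, a limits.cpu or limits.memory resourceFieldRef falls back to the node's allocatable values, which is exactly what this test asserts via the container's env. The env wiring, as a minimal sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// No resources set on the container, so these resolve to node allocatable.
	env := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
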
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4110,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:50:24.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9675 STEP: creating service affinity-nodeport-transition in namespace services-9675 STEP: creating replication controller affinity-nodeport-transition in namespace services-9675 I0603 00:50:24.533757 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9675, replica count: 3 I0603 00:50:27.584170 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:50:30.584408 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:50:30.594: INFO: Creating new exec pod Jun 3 00:50:35.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9675 execpod-affinityckxjq -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jun 3 00:50:35.847: INFO: stderr: "I0603 00:50:35.779262 3248 log.go:172] (0xc0009f1290) (0xc000639540) Create stream\nI0603 00:50:35.779310 3248 log.go:172] (0xc0009f1290) (0xc000639540) Stream added, broadcasting: 1\nI0603 00:50:35.782716 3248 log.go:172] (0xc0009f1290) Reply frame received for 1\nI0603 00:50:35.782763 3248 log.go:172] (0xc0009f1290) (0xc0004d65a0) Create stream\nI0603 00:50:35.782779 3248 log.go:172] (0xc0009f1290) (0xc0004d65a0) Stream added, broadcasting: 3\nI0603 00:50:35.783725 3248 log.go:172] (0xc0009f1290) Reply frame received for 3\nI0603 00:50:35.783768 3248 log.go:172] (0xc0009f1290) (0xc0004d6aa0) Create stream\nI0603 00:50:35.783783 3248 log.go:172] (0xc0009f1290) (0xc0004d6aa0) Stream added, broadcasting: 5\nI0603 00:50:35.784731 3248 log.go:172] (0xc0009f1290) Reply frame received for 5\nI0603 00:50:35.840336 3248 log.go:172] (0xc0009f1290) Data frame received for 3\nI0603 00:50:35.840379 3248 log.go:172] (0xc0004d65a0) (3) Data frame handling\nI0603 00:50:35.840401 3248 log.go:172] (0xc0009f1290) Data frame received for 5\nI0603 00:50:35.840411 3248 log.go:172] (0xc0004d6aa0) (5) Data frame handling\nI0603 00:50:35.840424 3248 log.go:172] (0xc0004d6aa0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0603 00:50:35.840765 3248 
log.go:172] (0xc0009f1290) Data frame received for 5\nI0603 00:50:35.840787 3248 log.go:172] (0xc0004d6aa0) (5) Data frame handling\nI0603 00:50:35.842291 3248 log.go:172] (0xc0009f1290) Data frame received for 1\nI0603 00:50:35.842325 3248 log.go:172] (0xc000639540) (1) Data frame handling\nI0603 00:50:35.842339 3248 log.go:172] (0xc000639540) (1) Data frame sent\nI0603 00:50:35.842354 3248 log.go:172] (0xc0009f1290) (0xc000639540) Stream removed, broadcasting: 1\nI0603 00:50:35.842391 3248 log.go:172] (0xc0009f1290) Go away received\nI0603 00:50:35.842714 3248 log.go:172] (0xc0009f1290) (0xc000639540) Stream removed, broadcasting: 1\nI0603 00:50:35.842743 3248 log.go:172] (0xc0009f1290) (0xc0004d65a0) Stream removed, broadcasting: 3\nI0603 00:50:35.842770 3248 log.go:172] (0xc0009f1290) (0xc0004d6aa0) Stream removed, broadcasting: 5\n" Jun 3 00:50:35.847: INFO: stdout: "" Jun 3 00:50:35.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9675 execpod-affinityckxjq -- /bin/sh -x -c nc -zv -t -w 2 10.102.204.52 80' Jun 3 00:50:36.056: INFO: stderr: "I0603 00:50:35.981370 3268 log.go:172] (0xc000a78fd0) (0xc000701ea0) Create stream\nI0603 00:50:35.981420 3268 log.go:172] (0xc000a78fd0) (0xc000701ea0) Stream added, broadcasting: 1\nI0603 00:50:35.984185 3268 log.go:172] (0xc000a78fd0) Reply frame received for 1\nI0603 00:50:35.984230 3268 log.go:172] (0xc000a78fd0) (0xc000710e60) Create stream\nI0603 00:50:35.984246 3268 log.go:172] (0xc000a78fd0) (0xc000710e60) Stream added, broadcasting: 3\nI0603 00:50:35.985573 3268 log.go:172] (0xc000a78fd0) Reply frame received for 3\nI0603 00:50:35.985610 3268 log.go:172] (0xc000a78fd0) (0xc000b500a0) Create stream\nI0603 00:50:35.985622 3268 log.go:172] (0xc000a78fd0) (0xc000b500a0) Stream added, broadcasting: 5\nI0603 00:50:35.986717 3268 log.go:172] (0xc000a78fd0) Reply frame received for 5\nI0603 00:50:36.049045 3268 log.go:172] (0xc000a78fd0) Data frame received for 3\nI0603 00:50:36.049091 3268 log.go:172] (0xc000710e60) (3) Data frame handling\nI0603 00:50:36.049247 3268 log.go:172] (0xc000a78fd0) Data frame received for 5\nI0603 00:50:36.049272 3268 log.go:172] (0xc000b500a0) (5) Data frame handling\nI0603 00:50:36.049356 3268 log.go:172] (0xc000b500a0) (5) Data frame sent\nI0603 00:50:36.049372 3268 log.go:172] (0xc000a78fd0) Data frame received for 5\n+ nc -zv -t -w 2 10.102.204.52 80\nConnection to 10.102.204.52 80 port [tcp/http] succeeded!\nI0603 00:50:36.049378 3268 log.go:172] (0xc000b500a0) (5) Data frame handling\nI0603 00:50:36.050906 3268 log.go:172] (0xc000a78fd0) Data frame received for 1\nI0603 00:50:36.050922 3268 log.go:172] (0xc000701ea0) (1) Data frame handling\nI0603 00:50:36.050938 3268 log.go:172] (0xc000701ea0) (1) Data frame sent\nI0603 00:50:36.051028 3268 log.go:172] (0xc000a78fd0) (0xc000701ea0) Stream removed, broadcasting: 1\nI0603 00:50:36.051088 3268 log.go:172] (0xc000a78fd0) Go away received\nI0603 00:50:36.051398 3268 log.go:172] (0xc000a78fd0) (0xc000701ea0) Stream removed, broadcasting: 1\nI0603 00:50:36.051417 3268 log.go:172] (0xc000a78fd0) (0xc000710e60) Stream removed, broadcasting: 3\nI0603 00:50:36.051425 3268 log.go:172] (0xc000a78fd0) (0xc000b500a0) Stream removed, broadcasting: 5\n" Jun 3 00:50:36.056: INFO: stdout: "" Jun 3 00:50:36.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9675 execpod-affinityckxjq -- /bin/sh -x -c nc -zv 
-t -w 2 172.17.0.13 30105' Jun 3 00:50:36.263: INFO: stderr: "I0603 00:50:36.179764 3288 log.go:172] (0xc000ae91e0) (0xc000a1e460) Create stream\nI0603 00:50:36.179818 3288 log.go:172] (0xc000ae91e0) (0xc000a1e460) Stream added, broadcasting: 1\nI0603 00:50:36.185378 3288 log.go:172] (0xc000ae91e0) Reply frame received for 1\nI0603 00:50:36.185447 3288 log.go:172] (0xc000ae91e0) (0xc0006b2640) Create stream\nI0603 00:50:36.185465 3288 log.go:172] (0xc000ae91e0) (0xc0006b2640) Stream added, broadcasting: 3\nI0603 00:50:36.186432 3288 log.go:172] (0xc000ae91e0) Reply frame received for 3\nI0603 00:50:36.186474 3288 log.go:172] (0xc000ae91e0) (0xc000550320) Create stream\nI0603 00:50:36.186490 3288 log.go:172] (0xc000ae91e0) (0xc000550320) Stream added, broadcasting: 5\nI0603 00:50:36.187553 3288 log.go:172] (0xc000ae91e0) Reply frame received for 5\nI0603 00:50:36.254839 3288 log.go:172] (0xc000ae91e0) Data frame received for 3\nI0603 00:50:36.254894 3288 log.go:172] (0xc0006b2640) (3) Data frame handling\nI0603 00:50:36.254929 3288 log.go:172] (0xc000ae91e0) Data frame received for 5\nI0603 00:50:36.254948 3288 log.go:172] (0xc000550320) (5) Data frame handling\nI0603 00:50:36.254967 3288 log.go:172] (0xc000550320) (5) Data frame sent\nI0603 00:50:36.254983 3288 log.go:172] (0xc000ae91e0) Data frame received for 5\nI0603 00:50:36.254998 3288 log.go:172] (0xc000550320) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30105\nConnection to 172.17.0.13 30105 port [tcp/30105] succeeded!\nI0603 00:50:36.256530 3288 log.go:172] (0xc000ae91e0) Data frame received for 1\nI0603 00:50:36.256560 3288 log.go:172] (0xc000a1e460) (1) Data frame handling\nI0603 00:50:36.256580 3288 log.go:172] (0xc000a1e460) (1) Data frame sent\nI0603 00:50:36.256593 3288 log.go:172] (0xc000ae91e0) (0xc000a1e460) Stream removed, broadcasting: 1\nI0603 00:50:36.256607 3288 log.go:172] (0xc000ae91e0) Go away received\nI0603 00:50:36.257365 3288 log.go:172] (0xc000ae91e0) (0xc000a1e460) Stream removed, broadcasting: 1\nI0603 00:50:36.257425 3288 log.go:172] (0xc000ae91e0) (0xc0006b2640) Stream removed, broadcasting: 3\nI0603 00:50:36.257439 3288 log.go:172] (0xc000ae91e0) (0xc000550320) Stream removed, broadcasting: 5\n" Jun 3 00:50:36.263: INFO: stdout: "" Jun 3 00:50:36.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9675 execpod-affinityckxjq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30105' Jun 3 00:50:36.475: INFO: stderr: "I0603 00:50:36.396352 3308 log.go:172] (0xc000a9f3f0) (0xc000afa280) Create stream\nI0603 00:50:36.396413 3308 log.go:172] (0xc000a9f3f0) (0xc000afa280) Stream added, broadcasting: 1\nI0603 00:50:36.402319 3308 log.go:172] (0xc000a9f3f0) Reply frame received for 1\nI0603 00:50:36.402375 3308 log.go:172] (0xc000a9f3f0) (0xc00070a640) Create stream\nI0603 00:50:36.402402 3308 log.go:172] (0xc000a9f3f0) (0xc00070a640) Stream added, broadcasting: 3\nI0603 00:50:36.403081 3308 log.go:172] (0xc000a9f3f0) Reply frame received for 3\nI0603 00:50:36.403114 3308 log.go:172] (0xc000a9f3f0) (0xc0004e25a0) Create stream\nI0603 00:50:36.403127 3308 log.go:172] (0xc000a9f3f0) (0xc0004e25a0) Stream added, broadcasting: 5\nI0603 00:50:36.403881 3308 log.go:172] (0xc000a9f3f0) Reply frame received for 5\nI0603 00:50:36.470222 3308 log.go:172] (0xc000a9f3f0) Data frame received for 3\nI0603 00:50:36.470269 3308 log.go:172] (0xc00070a640) (3) Data frame handling\nI0603 00:50:36.470297 3308 log.go:172] (0xc000a9f3f0) Data frame 
received for 5\nI0603 00:50:36.470311 3308 log.go:172] (0xc0004e25a0) (5) Data frame handling\nI0603 00:50:36.470330 3308 log.go:172] (0xc0004e25a0) (5) Data frame sent\nI0603 00:50:36.470340 3308 log.go:172] (0xc000a9f3f0) Data frame received for 5\nI0603 00:50:36.470354 3308 log.go:172] (0xc0004e25a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30105\nConnection to 172.17.0.12 30105 port [tcp/30105] succeeded!\nI0603 00:50:36.471341 3308 log.go:172] (0xc000a9f3f0) Data frame received for 1\nI0603 00:50:36.471369 3308 log.go:172] (0xc000afa280) (1) Data frame handling\nI0603 00:50:36.471379 3308 log.go:172] (0xc000afa280) (1) Data frame sent\nI0603 00:50:36.471390 3308 log.go:172] (0xc000a9f3f0) (0xc000afa280) Stream removed, broadcasting: 1\nI0603 00:50:36.471408 3308 log.go:172] (0xc000a9f3f0) Go away received\nI0603 00:50:36.471751 3308 log.go:172] (0xc000a9f3f0) (0xc000afa280) Stream removed, broadcasting: 1\nI0603 00:50:36.471766 3308 log.go:172] (0xc000a9f3f0) (0xc00070a640) Stream removed, broadcasting: 3\nI0603 00:50:36.471773 3308 log.go:172] (0xc000a9f3f0) (0xc0004e25a0) Stream removed, broadcasting: 5\n" Jun 3 00:50:36.475: INFO: stdout: "" Jun 3 00:50:36.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9675 execpod-affinityckxjq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30105/ ; done' Jun 3 00:50:36.808: INFO: stderr: "I0603 00:50:36.633966 3329 log.go:172] (0xc000ad7290) (0xc000bf40a0) Create stream\nI0603 00:50:36.634017 3329 log.go:172] (0xc000ad7290) (0xc000bf40a0) Stream added, broadcasting: 1\nI0603 00:50:36.639488 3329 log.go:172] (0xc000ad7290) Reply frame received for 1\nI0603 00:50:36.639557 3329 log.go:172] (0xc000ad7290) (0xc0006c1ea0) Create stream\nI0603 00:50:36.639578 3329 log.go:172] (0xc000ad7290) (0xc0006c1ea0) Stream added, broadcasting: 3\nI0603 00:50:36.640511 3329 log.go:172] (0xc000ad7290) Reply frame received for 3\nI0603 00:50:36.640536 3329 log.go:172] (0xc000ad7290) (0xc0006a6f00) Create stream\nI0603 00:50:36.640545 3329 log.go:172] (0xc000ad7290) (0xc0006a6f00) Stream added, broadcasting: 5\nI0603 00:50:36.641690 3329 log.go:172] (0xc000ad7290) Reply frame received for 5\nI0603 00:50:36.702355 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.702376 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.702398 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.702507 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.702525 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.702544 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.708170 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.708194 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.708215 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.708700 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.708713 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.708721 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.708755 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.708797 3329 log.go:172] (0xc0006c1ea0) (3) Data frame 
handling\nI0603 00:50:36.708843 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.715307 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.715334 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.715356 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.716031 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.716056 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.716064 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.716082 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.716110 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.716129 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.720607 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.720636 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.720656 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.721321 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.721345 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.721357 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.721370 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.721378 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.721386 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.727455 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.727486 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.727515 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.728321 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.728334 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.728352 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.728381 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.728399 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.728419 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.733860 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.733878 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.733892 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.734384 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.734405 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.734415 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.734437 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.734456 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.734531 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.739706 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.739729 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.739747 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.739976 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.739986 3329 log.go:172] (0xc0006a6f00) (5) 
Data frame handling\nI0603 00:50:36.739991 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.740031 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.740047 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.740063 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.745722 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.745733 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.745739 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.746409 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.746474 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.746495 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.746533 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.746552 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.746579 3329 log.go:172] (0xc000ad7290) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/I0603 00:50:36.746603 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.746635 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\n\nI0603 00:50:36.746700 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.749790 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.749802 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.749807 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.750117 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.750126 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.750132 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.750140 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.750144 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.750160 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.754763 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.754780 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.754799 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.755522 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.755544 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.755559 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.755585 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.755614 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.755629 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.759733 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.759752 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.759761 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.760404 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.760438 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.760458 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.760497 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.760519 3329 log.go:172] 
(0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.760541 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.767584 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.767599 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.767608 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.768287 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.768311 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.768324 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.768350 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.768371 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.768407 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.768436 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.768456 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.768505 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.773615 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.773635 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.773660 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.774175 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.774207 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.774219 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.774232 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.774242 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.774262 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.774280 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.774304 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.774323 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.779257 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.779282 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.779313 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.779833 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.779865 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.779892 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.779914 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.779938 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.779965 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.779988 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.780015 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.780088 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.784508 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.784546 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.784583 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.785362 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.785382 3329 
log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.785405 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.785425 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.785434 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.785444 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.785472 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.785485 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.785503 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\nI0603 00:50:36.790484 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.790535 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.790566 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.790945 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.790983 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.791012 3329 log.go:172] (0xc0006a6f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:36.791038 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.791051 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.791089 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.798488 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.798514 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.798532 3329 log.go:172] (0xc0006c1ea0) (3) Data frame sent\nI0603 00:50:36.799296 3329 log.go:172] (0xc000ad7290) Data frame received for 5\nI0603 00:50:36.799355 3329 log.go:172] (0xc0006a6f00) (5) Data frame handling\nI0603 00:50:36.799391 3329 log.go:172] (0xc000ad7290) Data frame received for 3\nI0603 00:50:36.799415 3329 log.go:172] (0xc0006c1ea0) (3) Data frame handling\nI0603 00:50:36.801712 3329 log.go:172] (0xc000ad7290) Data frame received for 1\nI0603 00:50:36.801746 3329 log.go:172] (0xc000bf40a0) (1) Data frame handling\nI0603 00:50:36.801770 3329 log.go:172] (0xc000bf40a0) (1) Data frame sent\nI0603 00:50:36.801801 3329 log.go:172] (0xc000ad7290) (0xc000bf40a0) Stream removed, broadcasting: 1\nI0603 00:50:36.801860 3329 log.go:172] (0xc000ad7290) Go away received\nI0603 00:50:36.802232 3329 log.go:172] (0xc000ad7290) (0xc000bf40a0) Stream removed, broadcasting: 1\nI0603 00:50:36.802256 3329 log.go:172] (0xc000ad7290) (0xc0006c1ea0) Stream removed, broadcasting: 3\nI0603 00:50:36.802268 3329 log.go:172] (0xc000ad7290) (0xc0006a6f00) Stream removed, broadcasting: 5\n" Jun 3 00:50:36.809: INFO: stdout: "\naffinity-nodeport-transition-279tw\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-279tw\naffinity-nodeport-transition-jgdxh\naffinity-nodeport-transition-jgdxh\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-279tw\naffinity-nodeport-transition-jgdxh\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-279tw\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-279tw" Jun 3 00:50:36.809: INFO: Received response from host: Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-279tw Jun 3 00:50:36.809: INFO: Received response from host: 
affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-279tw Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-jgdxh Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-jgdxh Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-279tw Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-jgdxh Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-279tw Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:36.809: INFO: Received response from host: affinity-nodeport-transition-279tw Jun 3 00:50:36.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9675 execpod-affinityckxjq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30105/ ; done' Jun 3 00:50:37.137: INFO: stderr: "I0603 00:50:36.964251 3348 log.go:172] (0xc000c14dc0) (0xc000af0500) Create stream\nI0603 00:50:36.964306 3348 log.go:172] (0xc000c14dc0) (0xc000af0500) Stream added, broadcasting: 1\nI0603 00:50:36.968310 3348 log.go:172] (0xc000c14dc0) Reply frame received for 1\nI0603 00:50:36.968351 3348 log.go:172] (0xc000c14dc0) (0xc0007f5040) Create stream\nI0603 00:50:36.968403 3348 log.go:172] (0xc000c14dc0) (0xc0007f5040) Stream added, broadcasting: 3\nI0603 00:50:36.969256 3348 log.go:172] (0xc000c14dc0) Reply frame received for 3\nI0603 00:50:36.969320 3348 log.go:172] (0xc000c14dc0) (0xc0007f55e0) Create stream\nI0603 00:50:36.969348 3348 log.go:172] (0xc000c14dc0) (0xc0007f55e0) Stream added, broadcasting: 5\nI0603 00:50:36.970651 3348 log.go:172] (0xc000c14dc0) Reply frame received for 5\nI0603 00:50:37.029435 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.029470 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.029482 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.029516 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.029559 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.029583 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.036410 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.036434 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.036451 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.037264 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.037296 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.037309 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.037350 3348 log.go:172] 
(0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.037364 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.037380 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.043010 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.043027 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.043038 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.043718 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.043734 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.043748 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.043809 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.043827 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.043845 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.047204 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.047219 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.047227 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.048082 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.048107 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.048120 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.048138 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.048152 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.048163 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.055402 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.055425 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.055445 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.056213 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.056231 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.056252 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.056277 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.056293 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.056310 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.060720 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.060748 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.060773 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.061790 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.061827 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.061867 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.061895 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.061930 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.061971 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.067340 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.067357 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.067374 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.068016 3348 
log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.068034 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.068051 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.068113 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.068128 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.068142 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.073978 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.073997 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.074011 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.074466 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.074480 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.074496 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\nI0603 00:50:37.074504 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.074510 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.074530 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\nI0603 00:50:37.074544 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.074554 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.074567 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.078321 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.078333 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.078339 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.079036 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.079046 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.079053 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.079089 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.079106 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.079121 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.083518 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.083571 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.083623 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.084563 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.084576 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.084583 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.084597 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.084620 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.084637 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.089724 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.089739 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.089750 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.090203 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.090236 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.090256 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 
00:50:37.090280 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.090297 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.090319 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.096831 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.096864 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.096889 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.097787 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.097822 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.097838 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.097853 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.097874 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.097908 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.103034 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.103058 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.103078 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.103583 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.103605 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.103633 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.103666 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.103686 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.103709 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.110466 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.110491 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.110517 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.111148 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.111177 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.111192 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.111211 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.111220 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.111248 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\nI0603 00:50:37.116333 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.116345 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.116352 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.116745 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.116769 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.116777 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.116786 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.116794 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.116801 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.122499 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.122519 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 
00:50:37.122537 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.122974 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.122993 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.123000 3348 log.go:172] (0xc0007f55e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30105/\nI0603 00:50:37.123023 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.123055 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.123076 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.128249 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.128270 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.128288 3348 log.go:172] (0xc0007f5040) (3) Data frame sent\nI0603 00:50:37.128966 3348 log.go:172] (0xc000c14dc0) Data frame received for 5\nI0603 00:50:37.128986 3348 log.go:172] (0xc0007f55e0) (5) Data frame handling\nI0603 00:50:37.129012 3348 log.go:172] (0xc000c14dc0) Data frame received for 3\nI0603 00:50:37.129037 3348 log.go:172] (0xc0007f5040) (3) Data frame handling\nI0603 00:50:37.131021 3348 log.go:172] (0xc000c14dc0) Data frame received for 1\nI0603 00:50:37.131049 3348 log.go:172] (0xc000af0500) (1) Data frame handling\nI0603 00:50:37.131074 3348 log.go:172] (0xc000af0500) (1) Data frame sent\nI0603 00:50:37.131201 3348 log.go:172] (0xc000c14dc0) (0xc000af0500) Stream removed, broadcasting: 1\nI0603 00:50:37.131599 3348 log.go:172] (0xc000c14dc0) (0xc000af0500) Stream removed, broadcasting: 1\nI0603 00:50:37.131626 3348 log.go:172] (0xc000c14dc0) (0xc0007f5040) Stream removed, broadcasting: 3\nI0603 00:50:37.131822 3348 log.go:172] (0xc000c14dc0) (0xc0007f55e0) Stream removed, broadcasting: 5\n" Jun 3 00:50:37.138: INFO: stdout: "\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm\naffinity-nodeport-transition-b8vlm" Jun 3 00:50:37.138: INFO: Received response from host: Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm 
Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Received response from host: affinity-nodeport-transition-b8vlm Jun 3 00:50:37.138: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9675, will wait for the garbage collector to delete the pods Jun 3 00:50:37.634: INFO: Deleting ReplicationController affinity-nodeport-transition took: 386.192107ms Jun 3 00:50:38.035: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.271804ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:50:45.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9675" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.039 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":258,"skipped":4121,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:50:45.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-zmtd STEP: Creating a pod to test atomic-volume-subpath Jun 3 00:50:45.537: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zmtd" in namespace "subpath-899" to be "Succeeded or Failed" Jun 3 00:50:45.560: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.031232ms Jun 3 00:50:47.683: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145387576s Jun 3 00:50:49.687: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 4.14957352s Jun 3 00:50:51.691: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 6.153530815s Jun 3 00:50:53.699: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.161769272s Jun 3 00:50:55.703: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 10.16617947s Jun 3 00:50:57.707: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 12.169962516s Jun 3 00:50:59.711: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 14.173858864s Jun 3 00:51:01.716: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 16.178328975s Jun 3 00:51:03.720: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 18.182484448s Jun 3 00:51:05.723: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 20.185682456s Jun 3 00:51:07.727: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 22.189811848s Jun 3 00:51:09.731: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Running", Reason="", readiness=true. Elapsed: 24.194126922s Jun 3 00:51:11.736: INFO: Pod "pod-subpath-test-secret-zmtd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.198317631s STEP: Saw pod success Jun 3 00:51:11.736: INFO: Pod "pod-subpath-test-secret-zmtd" satisfied condition "Succeeded or Failed" Jun 3 00:51:11.738: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-zmtd container test-container-subpath-secret-zmtd: STEP: delete the pod Jun 3 00:51:11.770: INFO: Waiting for pod pod-subpath-test-secret-zmtd to disappear Jun 3 00:51:11.782: INFO: Pod pod-subpath-test-secret-zmtd no longer exists STEP: Deleting pod pod-subpath-test-secret-zmtd Jun 3 00:51:11.782: INFO: Deleting pod "pod-subpath-test-secret-zmtd" in namespace "subpath-899" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:51:11.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-899" for this suite. 
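What this spec exercises is a volumeMount whose subPath selects a single key out of a Secret volume, so the container sees one file rather than the whole projected directory. A stripped-down equivalent, with illustrative names, key, and image:

kubectl create secret generic subpath-demo --from-literal=username=admin
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-subpath-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["cat", "/etc/creds/username"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds/username
      subPath: username   # mount one key as a file, not the whole volume
  volumes:
  - name: creds
    secret:
      secretName: subpath-demo
EOF

One caveat when adapting this: unlike a whole-volume mount, a subPath-mounted file does not receive live updates if the Secret is modified later.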
• [SLOW TEST:26.394 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":259,"skipped":4139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:51:11.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod Jun 3 00:51:11.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-7120 -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 3 00:51:11.977: INFO: stderr: "" Jun 3 00:51:11.977: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Jun 3 00:51:11.978: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 3 00:51:11.978: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7120" to be "running and ready, or succeeded" Jun 3 00:51:11.985: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.502165ms Jun 3 00:51:13.990: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012393313s Jun 3 00:51:15.995: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.01704754s Jun 3 00:51:15.995: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 3 00:51:15.995: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Jun 3 00:51:15.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7120' Jun 3 00:51:16.103: INFO: stderr: "" Jun 3 00:51:16.103: INFO: stdout: "I0603 00:51:14.305290 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/l6v8 442\nI0603 00:51:14.505454 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/4tt 201\nI0603 00:51:14.705584 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/2jm 467\nI0603 00:51:14.905464 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/t88 286\nI0603 00:51:15.105491 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/2fs 268\nI0603 00:51:15.305444 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/fzc 574\nI0603 00:51:15.505551 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/m46b 495\nI0603 00:51:15.705466 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/c96 384\nI0603 00:51:15.905505 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/4c9 288\n" STEP: limiting log lines Jun 3 00:51:16.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7120 --tail=1' Jun 3 00:51:16.215: INFO: stderr: "" Jun 3 00:51:16.215: INFO: stdout: "I0603 00:51:16.105454 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/2dz 474\n" Jun 3 00:51:16.215: INFO: got output "I0603 00:51:16.105454 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/2dz 474\n" STEP: limiting log bytes Jun 3 00:51:16.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7120 --limit-bytes=1' Jun 3 00:51:16.343: INFO: stderr: "" Jun 3 00:51:16.343: INFO: stdout: "I" Jun 3 00:51:16.343: INFO: got output "I" STEP: exposing timestamps Jun 3 00:51:16.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7120 --tail=1 --timestamps' Jun 3 00:51:16.455: INFO: stderr: "" Jun 3 00:51:16.456: INFO: stdout: "2020-06-03T00:51:16.305753546Z I0603 00:51:16.305512 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/5g9s 346\n" Jun 3 00:51:16.456: INFO: got output "2020-06-03T00:51:16.305753546Z I0603 00:51:16.305512 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/5g9s 346\n" STEP: restricting to a time range Jun 3 00:51:18.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7120 --since=1s' Jun 3 00:51:20.944: INFO: stderr: "" Jun 3 00:51:20.944: INFO: stdout: "I0603 00:51:18.905531 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/lg8 501\nI0603 00:51:19.105469 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/8q6j 523\nI0603 00:51:19.305480 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/rrd4 598\nI0603 00:51:19.505517 1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/lr7r 441\nI0603 00:51:19.705493 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/qlhw 211\nI0603 00:51:19.905483 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/zm2 271\nI0603 00:51:20.105496 1 
logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/lpf7 529\nI0603 00:51:20.305456 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/69l 597\nI0603 00:51:20.505528 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/bkcj 516\nI0603 00:51:20.705456 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/ns/pods/fsw 471\nI0603 00:51:20.905465 1 logs_generator.go:76] 33 GET /api/v1/namespaces/ns/pods/p957 592\n" Jun 3 00:51:20.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7120 --since=24h' Jun 3 00:51:21.074: INFO: stderr: "" Jun 3 00:51:21.074: INFO: stdout: "I0603 00:51:14.305290 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/l6v8 442\nI0603 00:51:14.505454 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/4tt 201\nI0603 00:51:14.705584 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/2jm 467\nI0603 00:51:14.905464 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/t88 286\nI0603 00:51:15.105491 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/2fs 268\nI0603 00:51:15.305444 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/fzc 574\nI0603 00:51:15.505551 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/m46b 495\nI0603 00:51:15.705466 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/c96 384\nI0603 00:51:15.905505 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/4c9 288\nI0603 00:51:16.105454 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/2dz 474\nI0603 00:51:16.305512 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/5g9s 346\nI0603 00:51:16.505462 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/kl42 555\nI0603 00:51:16.705554 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/vrpl 273\nI0603 00:51:16.905494 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/kmvg 472\nI0603 00:51:17.105456 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/l6d 407\nI0603 00:51:17.305484 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/rgft 479\nI0603 00:51:17.505511 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/dpxq 584\nI0603 00:51:17.705489 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/sbxn 225\nI0603 00:51:17.905490 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/pvn 449\nI0603 00:51:18.105482 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/7wmp 365\nI0603 00:51:18.305486 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/p8c 354\nI0603 00:51:18.505509 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/n78l 409\nI0603 00:51:18.705557 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/p2pd 573\nI0603 00:51:18.905531 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/lg8 501\nI0603 00:51:19.105469 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/8q6j 523\nI0603 00:51:19.305480 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/rrd4 598\nI0603 00:51:19.505517 1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/lr7r 441\nI0603 00:51:19.705493 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/qlhw 211\nI0603 00:51:19.905483 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/zm2 271\nI0603 00:51:20.105496 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/lpf7 
529\nI0603 00:51:20.305456 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/69l 597\nI0603 00:51:20.505528 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/bkcj 516\nI0603 00:51:20.705456 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/ns/pods/fsw 471\nI0603 00:51:20.905465 1 logs_generator.go:76] 33 GET /api/v1/namespaces/ns/pods/p957 592\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 Jun 3 00:51:21.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7120' Jun 3 00:51:23.934: INFO: stderr: "" Jun 3 00:51:23.934: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:51:23.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7120" for this suite. • [SLOW TEST:12.126 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":260,"skipped":4176,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:51:23.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 3 00:51:23.997: INFO: namespace kubectl-1670 Jun 3 00:51:23.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1670' Jun 3 00:51:24.320: INFO: stderr: "" Jun 3 00:51:24.320: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 3 00:51:25.324: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:51:25.325: INFO: Found 0 / 1 Jun 3 00:51:26.325: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:51:26.325: INFO: Found 0 / 1 Jun 3 00:51:27.325: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:51:27.325: INFO: Found 1 / 1 Jun 3 00:51:27.325: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 00:51:27.329: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 00:51:27.329: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
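The expose steps that follow chain two services off the same selector: kubectl expose rc copies the RC's pod selector into a new service (rm2), and kubectl expose service rm2 re-uses rm2's selector for a second service (rm3), so both front the same agnhost pod on target port 6379. A minimal sketch of the same sequence for reproduction outside the suite (names and ports mirror this run; the trailing endpoints check is an assumed verification step, not part of the test):
  kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1670
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1670
  # Both services should list the same pod IP behind port 6379:
  kubectl get endpoints rm2 rm3 --namespace=kubectl-1670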
Jun 3 00:51:27.329: INFO: wait on agnhost-master startup in kubectl-1670 Jun 3 00:51:27.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-fjzzq agnhost-master --namespace=kubectl-1670' Jun 3 00:51:27.440: INFO: stderr: "" Jun 3 00:51:27.440: INFO: stdout: "Paused\n" STEP: exposing RC Jun 3 00:51:27.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1670' Jun 3 00:51:27.652: INFO: stderr: "" Jun 3 00:51:27.652: INFO: stdout: "service/rm2 exposed\n" Jun 3 00:51:27.655: INFO: Service rm2 in namespace kubectl-1670 found. STEP: exposing service Jun 3 00:51:29.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1670' Jun 3 00:51:29.812: INFO: stderr: "" Jun 3 00:51:29.812: INFO: stdout: "service/rm3 exposed\n" Jun 3 00:51:29.833: INFO: Service rm3 in namespace kubectl-1670 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:51:31.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1670" for this suite. • [SLOW TEST:7.908 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":261,"skipped":4184,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:51:31.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 00:51:31.916: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 3 00:51:33.984: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:51:34.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6139" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":262,"skipped":4205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:51:35.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4250.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4250.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 00:51:43.594: INFO: DNS probes using dns-4250/dns-test-850ceef0-fc63-493d-ae2e-8d3e1b11374a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:51:43.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4250" for this suite. 
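The probe pod above loops the wheezy and jessie command scripts until every dig query returns an answer (cluster DNS name over UDP and TCP, plus the pod's own A record), writing OK marker files that the prober then reads back. A rough equivalent one-off check can be run by hand from any image that ships DNS tools; a sketch using the busybox image seen elsewhere in this run (pod name arbitrary, busybox's nslookup standing in for dig):
  # One-shot pod that resolves the API server's cluster DNS name, then is removed:
  kubectl run dns-check --image=docker.io/library/busybox:1.29 --restart=Never --rm -it \
    -- nslookup kubernetes.default.svc.cluster.local
  # Pod A records follow the <ip-with-dashes>.<namespace>.pod.cluster.local form built
  # by the podARec line in the probe script.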
• [SLOW TEST:8.679 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":263,"skipped":4241,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:51:43.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 3 00:51:43.735: INFO: PodSpec: initContainers in spec.initContainers Jun 3 00:52:33.407: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-acc85943-13c5-4e7a-ac84-83e9a6de3ee9", GenerateName:"", Namespace:"init-container-3543", SelfLink:"/api/v1/namespaces/init-container-3543/pods/pod-init-acc85943-13c5-4e7a-ac84-83e9a6de3ee9", UID:"50553629-d9e8-4bfb-8d94-916cba00f637", ResourceVersion:"9819178", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726742303, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"735865264"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041838e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004183900)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004183920), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004183940)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fbdrd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006e0eec0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fbdrd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fbdrd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fbdrd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00364f318), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ad8690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00364f3a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00364f3c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00364f3c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00364f3cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742304, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742304, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742304, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726742303, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.253", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.253"}}, StartTime:(*v1.Time)(0xc004183960), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0041839a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ad87e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e089c3800c63d32c151058a42d70cd70eb73bc649fbb1e1b9cbf29595657eae0", 
Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0041839c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004183980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00364f44f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:52:33.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3543" for this suite. • [SLOW TEST:49.766 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":264,"skipped":4245,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:52:33.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:52:46.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4158" for this suite. • [SLOW TEST:13.231 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":265,"skipped":4246,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:52:46.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5192 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5192 I0603 00:52:46.892985 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5192, replica count: 2 I0603 00:52:49.943598 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 00:52:52.943853 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 00:52:52.943: INFO: Creating new exec pod Jun 3 00:52:57.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5192 execpodks5vs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 3 00:53:00.834: INFO: stderr: "I0603 00:53:00.733654 
3615 log.go:172] (0xc00093a6e0) (0xc0006e0a00) Create stream\nI0603 00:53:00.733704 3615 log.go:172] (0xc00093a6e0) (0xc0006e0a00) Stream added, broadcasting: 1\nI0603 00:53:00.736029 3615 log.go:172] (0xc00093a6e0) Reply frame received for 1\nI0603 00:53:00.736078 3615 log.go:172] (0xc00093a6e0) (0xc00065cc80) Create stream\nI0603 00:53:00.736092 3615 log.go:172] (0xc00093a6e0) (0xc00065cc80) Stream added, broadcasting: 3\nI0603 00:53:00.737360 3615 log.go:172] (0xc00093a6e0) Reply frame received for 3\nI0603 00:53:00.737421 3615 log.go:172] (0xc00093a6e0) (0xc0006e0f00) Create stream\nI0603 00:53:00.737449 3615 log.go:172] (0xc00093a6e0) (0xc0006e0f00) Stream added, broadcasting: 5\nI0603 00:53:00.738682 3615 log.go:172] (0xc00093a6e0) Reply frame received for 5\nI0603 00:53:00.824197 3615 log.go:172] (0xc00093a6e0) Data frame received for 5\nI0603 00:53:00.824247 3615 log.go:172] (0xc0006e0f00) (5) Data frame handling\nI0603 00:53:00.824286 3615 log.go:172] (0xc0006e0f00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0603 00:53:00.824402 3615 log.go:172] (0xc00093a6e0) Data frame received for 5\nI0603 00:53:00.824428 3615 log.go:172] (0xc0006e0f00) (5) Data frame handling\nI0603 00:53:00.824453 3615 log.go:172] (0xc0006e0f00) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0603 00:53:00.824781 3615 log.go:172] (0xc00093a6e0) Data frame received for 5\nI0603 00:53:00.824821 3615 log.go:172] (0xc0006e0f00) (5) Data frame handling\nI0603 00:53:00.824874 3615 log.go:172] (0xc00093a6e0) Data frame received for 3\nI0603 00:53:00.824905 3615 log.go:172] (0xc00065cc80) (3) Data frame handling\nI0603 00:53:00.826748 3615 log.go:172] (0xc00093a6e0) Data frame received for 1\nI0603 00:53:00.826782 3615 log.go:172] (0xc0006e0a00) (1) Data frame handling\nI0603 00:53:00.826803 3615 log.go:172] (0xc0006e0a00) (1) Data frame sent\nI0603 00:53:00.826815 3615 log.go:172] (0xc00093a6e0) (0xc0006e0a00) Stream removed, broadcasting: 1\nI0603 00:53:00.826958 3615 log.go:172] (0xc00093a6e0) Go away received\nI0603 00:53:00.827429 3615 log.go:172] (0xc00093a6e0) (0xc0006e0a00) Stream removed, broadcasting: 1\nI0603 00:53:00.827465 3615 log.go:172] (0xc00093a6e0) (0xc00065cc80) Stream removed, broadcasting: 3\nI0603 00:53:00.827496 3615 log.go:172] (0xc00093a6e0) (0xc0006e0f00) Stream removed, broadcasting: 5\n" Jun 3 00:53:00.834: INFO: stdout: "" Jun 3 00:53:00.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5192 execpodks5vs -- /bin/sh -x -c nc -zv -t -w 2 10.104.194.86 80' Jun 3 00:53:01.088: INFO: stderr: "I0603 00:53:01.024605 3649 log.go:172] (0xc000736580) (0xc0005179a0) Create stream\nI0603 00:53:01.024683 3649 log.go:172] (0xc000736580) (0xc0005179a0) Stream added, broadcasting: 1\nI0603 00:53:01.027601 3649 log.go:172] (0xc000736580) Reply frame received for 1\nI0603 00:53:01.027669 3649 log.go:172] (0xc000736580) (0xc0004f8280) Create stream\nI0603 00:53:01.027690 3649 log.go:172] (0xc000736580) (0xc0004f8280) Stream added, broadcasting: 3\nI0603 00:53:01.029306 3649 log.go:172] (0xc000736580) Reply frame received for 3\nI0603 00:53:01.029435 3649 log.go:172] (0xc000736580) (0xc0003cedc0) Create stream\nI0603 00:53:01.029495 3649 log.go:172] (0xc000736580) (0xc0003cedc0) Stream added, broadcasting: 5\nI0603 00:53:01.032022 3649 log.go:172] (0xc000736580) Reply frame received for 5\nI0603 00:53:01.080164 3649 log.go:172] (0xc000736580) Data frame received 
for 3\nI0603 00:53:01.080225 3649 log.go:172] (0xc0004f8280) (3) Data frame handling\nI0603 00:53:01.080254 3649 log.go:172] (0xc000736580) Data frame received for 5\nI0603 00:53:01.080269 3649 log.go:172] (0xc0003cedc0) (5) Data frame handling\nI0603 00:53:01.080287 3649 log.go:172] (0xc0003cedc0) (5) Data frame sent\nI0603 00:53:01.080302 3649 log.go:172] (0xc000736580) Data frame received for 5\nI0603 00:53:01.080312 3649 log.go:172] (0xc0003cedc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.194.86 80\nConnection to 10.104.194.86 80 port [tcp/http] succeeded!\nI0603 00:53:01.082472 3649 log.go:172] (0xc000736580) Data frame received for 1\nI0603 00:53:01.082501 3649 log.go:172] (0xc0005179a0) (1) Data frame handling\nI0603 00:53:01.082518 3649 log.go:172] (0xc0005179a0) (1) Data frame sent\nI0603 00:53:01.082544 3649 log.go:172] (0xc000736580) (0xc0005179a0) Stream removed, broadcasting: 1\nI0603 00:53:01.082577 3649 log.go:172] (0xc000736580) Go away received\nI0603 00:53:01.082948 3649 log.go:172] (0xc000736580) (0xc0005179a0) Stream removed, broadcasting: 1\nI0603 00:53:01.082964 3649 log.go:172] (0xc000736580) (0xc0004f8280) Stream removed, broadcasting: 3\nI0603 00:53:01.082972 3649 log.go:172] (0xc000736580) (0xc0003cedc0) Stream removed, broadcasting: 5\n" Jun 3 00:53:01.088: INFO: stdout: "" Jun 3 00:53:01.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5192 execpodks5vs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31177' Jun 3 00:53:01.292: INFO: stderr: "I0603 00:53:01.217879 3671 log.go:172] (0xc000a36e70) (0xc000656aa0) Create stream\nI0603 00:53:01.217941 3671 log.go:172] (0xc000a36e70) (0xc000656aa0) Stream added, broadcasting: 1\nI0603 00:53:01.222311 3671 log.go:172] (0xc000a36e70) Reply frame received for 1\nI0603 00:53:01.222333 3671 log.go:172] (0xc000a36e70) (0xc0006572c0) Create stream\nI0603 00:53:01.222340 3671 log.go:172] (0xc000a36e70) (0xc0006572c0) Stream added, broadcasting: 3\nI0603 00:53:01.223171 3671 log.go:172] (0xc000a36e70) Reply frame received for 3\nI0603 00:53:01.223186 3671 log.go:172] (0xc000a36e70) (0xc000657900) Create stream\nI0603 00:53:01.223193 3671 log.go:172] (0xc000a36e70) (0xc000657900) Stream added, broadcasting: 5\nI0603 00:53:01.223875 3671 log.go:172] (0xc000a36e70) Reply frame received for 5\nI0603 00:53:01.283894 3671 log.go:172] (0xc000a36e70) Data frame received for 5\nI0603 00:53:01.283934 3671 log.go:172] (0xc000657900) (5) Data frame handling\nI0603 00:53:01.283962 3671 log.go:172] (0xc000657900) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31177\nConnection to 172.17.0.13 31177 port [tcp/31177] succeeded!\nI0603 00:53:01.284153 3671 log.go:172] (0xc000a36e70) Data frame received for 5\nI0603 00:53:01.284181 3671 log.go:172] (0xc000657900) (5) Data frame handling\nI0603 00:53:01.284365 3671 log.go:172] (0xc000a36e70) Data frame received for 3\nI0603 00:53:01.284387 3671 log.go:172] (0xc0006572c0) (3) Data frame handling\nI0603 00:53:01.286132 3671 log.go:172] (0xc000a36e70) Data frame received for 1\nI0603 00:53:01.286168 3671 log.go:172] (0xc000656aa0) (1) Data frame handling\nI0603 00:53:01.286195 3671 log.go:172] (0xc000656aa0) (1) Data frame sent\nI0603 00:53:01.286211 3671 log.go:172] (0xc000a36e70) (0xc000656aa0) Stream removed, broadcasting: 1\nI0603 00:53:01.286639 3671 log.go:172] (0xc000a36e70) (0xc000656aa0) Stream removed, broadcasting: 1\nI0603 00:53:01.286679 3671 log.go:172] (0xc000a36e70) (0xc0006572c0) Stream 
removed, broadcasting: 3\nI0603 00:53:01.286702 3671 log.go:172] (0xc000a36e70) (0xc000657900) Stream removed, broadcasting: 5\n" Jun 3 00:53:01.292: INFO: stdout: "" Jun 3 00:53:01.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5192 execpodks5vs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31177' Jun 3 00:53:01.501: INFO: stderr: "I0603 00:53:01.434868 3690 log.go:172] (0xc0009fd550) (0xc000bce640) Create stream\nI0603 00:53:01.434939 3690 log.go:172] (0xc0009fd550) (0xc000bce640) Stream added, broadcasting: 1\nI0603 00:53:01.441433 3690 log.go:172] (0xc0009fd550) Reply frame received for 1\nI0603 00:53:01.441481 3690 log.go:172] (0xc0009fd550) (0xc00067ed20) Create stream\nI0603 00:53:01.441494 3690 log.go:172] (0xc0009fd550) (0xc00067ed20) Stream added, broadcasting: 3\nI0603 00:53:01.442421 3690 log.go:172] (0xc0009fd550) Reply frame received for 3\nI0603 00:53:01.442460 3690 log.go:172] (0xc0009fd550) (0xc000518dc0) Create stream\nI0603 00:53:01.442471 3690 log.go:172] (0xc0009fd550) (0xc000518dc0) Stream added, broadcasting: 5\nI0603 00:53:01.443432 3690 log.go:172] (0xc0009fd550) Reply frame received for 5\nI0603 00:53:01.494811 3690 log.go:172] (0xc0009fd550) Data frame received for 5\nI0603 00:53:01.494851 3690 log.go:172] (0xc000518dc0) (5) Data frame handling\nI0603 00:53:01.494875 3690 log.go:172] (0xc000518dc0) (5) Data frame sent\nI0603 00:53:01.494891 3690 log.go:172] (0xc0009fd550) Data frame received for 5\nI0603 00:53:01.494905 3690 log.go:172] (0xc000518dc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31177\nConnection to 172.17.0.12 31177 port [tcp/31177] succeeded!\nI0603 00:53:01.494968 3690 log.go:172] (0xc0009fd550) Data frame received for 3\nI0603 00:53:01.494994 3690 log.go:172] (0xc00067ed20) (3) Data frame handling\nI0603 00:53:01.495878 3690 log.go:172] (0xc0009fd550) Data frame received for 1\nI0603 00:53:01.495894 3690 log.go:172] (0xc000bce640) (1) Data frame handling\nI0603 00:53:01.495907 3690 log.go:172] (0xc000bce640) (1) Data frame sent\nI0603 00:53:01.495926 3690 log.go:172] (0xc0009fd550) (0xc000bce640) Stream removed, broadcasting: 1\nI0603 00:53:01.496061 3690 log.go:172] (0xc0009fd550) Go away received\nI0603 00:53:01.496200 3690 log.go:172] (0xc0009fd550) (0xc000bce640) Stream removed, broadcasting: 1\nI0603 00:53:01.496218 3690 log.go:172] (0xc0009fd550) (0xc00067ed20) Stream removed, broadcasting: 3\nI0603 00:53:01.496228 3690 log.go:172] (0xc0009fd550) (0xc000518dc0) Stream removed, broadcasting: 5\n" Jun 3 00:53:01.501: INFO: stdout: "" Jun 3 00:53:01.501: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:53:01.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5192" for this suite. 
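The four nc probes above cover every path into the service after the type change: the DNS name and ClusterIP (which exist for any Service), then the allocated nodePort on both nodes (which only exists once type=NodePort). A manual re-run would issue the same probes; in this sketch the ClusterIP, node IPs, and nodePort 31177 are specific to this run and would normally be read back from the API first:
  kubectl get svc externalname-service --namespace=services-5192 \
    -o jsonpath='{.spec.clusterIP} {.spec.ports[0].nodePort}'
  kubectl exec --namespace=services-5192 execpodks5vs -- /bin/sh -c 'nc -zv -t -w 2 externalname-service 80'
  kubectl exec --namespace=services-5192 execpodks5vs -- /bin/sh -c 'nc -zv -t -w 2 10.104.194.86 80'
  kubectl exec --namespace=services-5192 execpodks5vs -- /bin/sh -c 'nc -zv -t -w 2 172.17.0.13 31177'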
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.918 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":266,"skipped":4251,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:53:01.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8808 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 00:53:01.661: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 00:53:01.810: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:53:03.929: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 00:53:05.813: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:53:07.814: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:53:09.814: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:53:11.814: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:53:13.815: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:53:15.813: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 00:53:17.814: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 00:53:17.820: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 00:53:21.879: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8808 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:53:21.879: INFO: >>> kubeConfig: /root/.kube/config I0603 00:53:21.904998 7 log.go:172] (0xc005aa6000) (0xc0003f3680) Create stream I0603 00:53:21.905038 7 log.go:172] (0xc005aa6000) (0xc0003f3680) Stream added, broadcasting: 1 I0603 00:53:21.907653 7 log.go:172] (0xc005aa6000) Reply frame received for 1 I0603 00:53:21.907698 7 log.go:172] (0xc005aa6000) (0xc0003f3a40) Create stream I0603 00:53:21.907717 7 log.go:172] (0xc005aa6000) (0xc0003f3a40) Stream added, broadcasting: 3 I0603 00:53:21.909050 7 log.go:172] (0xc005aa6000) Reply frame received for 3 I0603 00:53:21.909089 7 log.go:172] 
(0xc005aa6000) (0xc0026f6320) Create stream I0603 00:53:21.909274 7 log.go:172] (0xc005aa6000) (0xc0026f6320) Stream added, broadcasting: 5 I0603 00:53:21.910401 7 log.go:172] (0xc005aa6000) Reply frame received for 5 I0603 00:53:22.991676 7 log.go:172] (0xc005aa6000) Data frame received for 3 I0603 00:53:22.991722 7 log.go:172] (0xc0003f3a40) (3) Data frame handling I0603 00:53:22.991754 7 log.go:172] (0xc0003f3a40) (3) Data frame sent I0603 00:53:22.992139 7 log.go:172] (0xc005aa6000) Data frame received for 5 I0603 00:53:22.992190 7 log.go:172] (0xc0026f6320) (5) Data frame handling I0603 00:53:22.992229 7 log.go:172] (0xc005aa6000) Data frame received for 3 I0603 00:53:22.992254 7 log.go:172] (0xc0003f3a40) (3) Data frame handling I0603 00:53:22.994610 7 log.go:172] (0xc005aa6000) Data frame received for 1 I0603 00:53:22.994650 7 log.go:172] (0xc0003f3680) (1) Data frame handling I0603 00:53:22.994673 7 log.go:172] (0xc0003f3680) (1) Data frame sent I0603 00:53:22.994714 7 log.go:172] (0xc005aa6000) (0xc0003f3680) Stream removed, broadcasting: 1 I0603 00:53:22.994829 7 log.go:172] (0xc005aa6000) (0xc0003f3680) Stream removed, broadcasting: 1 I0603 00:53:22.994863 7 log.go:172] (0xc005aa6000) (0xc0003f3a40) Stream removed, broadcasting: 3 I0603 00:53:22.994887 7 log.go:172] (0xc005aa6000) (0xc0026f6320) Stream removed, broadcasting: 5 Jun 3 00:53:22.994: INFO: Found all expected endpoints: [netserver-0] I0603 00:53:22.995352 7 log.go:172] (0xc005aa6000) Go away received Jun 3 00:53:22.999: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.197 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8808 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:53:22.999: INFO: >>> kubeConfig: /root/.kube/config I0603 00:53:23.040245 7 log.go:172] (0xc001416630) (0xc0026f6780) Create stream I0603 00:53:23.040281 7 log.go:172] (0xc001416630) (0xc0026f6780) Stream added, broadcasting: 1 I0603 00:53:23.043186 7 log.go:172] (0xc001416630) Reply frame received for 1 I0603 00:53:23.043235 7 log.go:172] (0xc001416630) (0xc0008b2c80) Create stream I0603 00:53:23.043249 7 log.go:172] (0xc001416630) (0xc0008b2c80) Stream added, broadcasting: 3 I0603 00:53:23.044164 7 log.go:172] (0xc001416630) Reply frame received for 3 I0603 00:53:23.044203 7 log.go:172] (0xc001416630) (0xc0026f68c0) Create stream I0603 00:53:23.044217 7 log.go:172] (0xc001416630) (0xc0026f68c0) Stream added, broadcasting: 5 I0603 00:53:23.045277 7 log.go:172] (0xc001416630) Reply frame received for 5 I0603 00:53:24.118669 7 log.go:172] (0xc001416630) Data frame received for 3 I0603 00:53:24.118699 7 log.go:172] (0xc0008b2c80) (3) Data frame handling I0603 00:53:24.118725 7 log.go:172] (0xc0008b2c80) (3) Data frame sent I0603 00:53:24.118881 7 log.go:172] (0xc001416630) Data frame received for 3 I0603 00:53:24.118902 7 log.go:172] (0xc0008b2c80) (3) Data frame handling I0603 00:53:24.118925 7 log.go:172] (0xc001416630) Data frame received for 5 I0603 00:53:24.118941 7 log.go:172] (0xc0026f68c0) (5) Data frame handling I0603 00:53:24.120067 7 log.go:172] (0xc001416630) Data frame received for 1 I0603 00:53:24.120089 7 log.go:172] (0xc0026f6780) (1) Data frame handling I0603 00:53:24.120113 7 log.go:172] (0xc0026f6780) (1) Data frame sent I0603 00:53:24.120131 7 log.go:172] (0xc001416630) (0xc0026f6780) Stream removed, broadcasting: 1 I0603 00:53:24.120147 7 log.go:172] (0xc001416630) Go away received I0603 00:53:24.120268 7 
log.go:172] (0xc001416630) (0xc0026f6780) Stream removed, broadcasting: 1 I0603 00:53:24.120285 7 log.go:172] (0xc001416630) (0xc0008b2c80) Stream removed, broadcasting: 3 I0603 00:53:24.120294 7 log.go:172] (0xc001416630) (0xc0026f68c0) Stream removed, broadcasting: 5 Jun 3 00:53:24.120: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:53:24.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8808" for this suite. • [SLOW TEST:22.527 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:53:24.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Jun 3 00:55:24.894: INFO: Successfully updated pod "var-expansion-9dbd63cd-04d3-461c-94be-a677ac022108" STEP: waiting for pod running STEP: deleting the pod gracefully Jun 3 00:55:26.907: INFO: Deleting pod "var-expansion-9dbd63cd-04d3-461c-94be-a677ac022108" in namespace "var-expansion-3921" Jun 3 00:55:26.914: INFO: Wait up to 5m0s for pod "var-expansion-9dbd63cd-04d3-461c-94be-a677ac022108" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:56:06.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3921" for this suite. 
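The failed condition exercised in this test comes from subPathExpr, which expands $(VAR) references against the container's environment when the volume is mounted; until the pod is updated so the expansion yields a mountable path, the container cannot start. A minimal sketch of the mechanism, not the suite's actual manifest (pod name, image, and paths are illustrative):
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-expr-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls /vol && sleep 3600"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumeMounts:
      - name: work
        mountPath: /vol
        # Expanded per pod from the env var above; a bad expansion blocks container start.
        subPathExpr: $(POD_NAME)
    volumes:
    - name: work
      emptyDir: {}
  EOF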
• [SLOW TEST:162.830 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":268,"skipped":4286,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:56:06.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170 STEP: updating the pod Jun 3 00:56:15.611: INFO: Successfully updated pod "var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170" STEP: waiting for pod and container restart STEP: Failing liveness probe Jun 3 00:56:15.636: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-7781 PodName:var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:56:15.637: INFO: >>> kubeConfig: /root/.kube/config I0603 00:56:15.671736 7 log.go:172] (0xc001416580) (0xc0026f65a0) Create stream I0603 00:56:15.671845 7 log.go:172] (0xc001416580) (0xc0026f65a0) Stream added, broadcasting: 1 I0603 00:56:15.673925 7 log.go:172] (0xc001416580) Reply frame received for 1 I0603 00:56:15.673965 7 log.go:172] (0xc001416580) (0xc002548640) Create stream I0603 00:56:15.673980 7 log.go:172] (0xc001416580) (0xc002548640) Stream added, broadcasting: 3 I0603 00:56:15.675027 7 log.go:172] (0xc001416580) Reply frame received for 3 I0603 00:56:15.675072 7 log.go:172] (0xc001416580) (0xc0026f6640) Create stream I0603 00:56:15.675085 7 log.go:172] (0xc001416580) (0xc0026f6640) Stream added, broadcasting: 5 I0603 00:56:15.675973 7 log.go:172] (0xc001416580) Reply frame received for 5 I0603 00:56:15.750040 7 log.go:172] (0xc001416580) Data frame received for 3 I0603 00:56:15.750070 7 log.go:172] (0xc002548640) (3) Data frame handling I0603 00:56:15.750174 7 log.go:172] (0xc001416580) Data frame received for 5 I0603 00:56:15.750193 7 log.go:172] (0xc0026f6640) (5) Data frame handling I0603 00:56:15.752231 7 log.go:172] (0xc001416580) Data frame received for 1 I0603 00:56:15.752250 7 log.go:172] (0xc0026f65a0) (1) Data frame handling I0603 00:56:15.752264 7 log.go:172] (0xc0026f65a0) (1) Data frame sent I0603 00:56:15.752277 7 log.go:172] (0xc001416580) (0xc0026f65a0) Stream removed, broadcasting: 1 I0603 
00:56:15.752297 7 log.go:172] (0xc001416580) Go away received I0603 00:56:15.752423 7 log.go:172] (0xc001416580) (0xc0026f65a0) Stream removed, broadcasting: 1 I0603 00:56:15.752440 7 log.go:172] (0xc001416580) (0xc002548640) Stream removed, broadcasting: 3 I0603 00:56:15.752448 7 log.go:172] (0xc001416580) (0xc0026f6640) Stream removed, broadcasting: 5 Jun 3 00:56:15.752: INFO: Pod exec output: / STEP: Waiting for container to restart Jun 3 00:56:15.755: INFO: Container dapi-container, restarts: 0 Jun 3 00:56:25.760: INFO: Container dapi-container, restarts: 0 Jun 3 00:56:35.760: INFO: Container dapi-container, restarts: 0 Jun 3 00:56:45.783: INFO: Container dapi-container, restarts: 0 Jun 3 00:56:55.760: INFO: Container dapi-container, restarts: 1 Jun 3 00:56:55.760: INFO: Container has restart count: 1 STEP: Rewriting the file Jun 3 00:56:55.760: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-7781 PodName:var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:56:55.760: INFO: >>> kubeConfig: /root/.kube/config I0603 00:56:55.816480 7 log.go:172] (0xc00278fce0) (0xc0010315e0) Create stream I0603 00:56:55.816512 7 log.go:172] (0xc00278fce0) (0xc0010315e0) Stream added, broadcasting: 1 I0603 00:56:55.818055 7 log.go:172] (0xc00278fce0) Reply frame received for 1 I0603 00:56:55.818085 7 log.go:172] (0xc00278fce0) (0xc001930000) Create stream I0603 00:56:55.818095 7 log.go:172] (0xc00278fce0) (0xc001930000) Stream added, broadcasting: 3 I0603 00:56:55.818594 7 log.go:172] (0xc00278fce0) Reply frame received for 3 I0603 00:56:55.818615 7 log.go:172] (0xc00278fce0) (0xc001930280) Create stream I0603 00:56:55.818624 7 log.go:172] (0xc00278fce0) (0xc001930280) Stream added, broadcasting: 5 I0603 00:56:55.819090 7 log.go:172] (0xc00278fce0) Reply frame received for 5 I0603 00:56:55.878187 7 log.go:172] (0xc00278fce0) Data frame received for 5 I0603 00:56:55.878242 7 log.go:172] (0xc00278fce0) Data frame received for 3 I0603 00:56:55.878395 7 log.go:172] (0xc001930000) (3) Data frame handling I0603 00:56:55.878448 7 log.go:172] (0xc001930280) (5) Data frame handling I0603 00:56:55.880107 7 log.go:172] (0xc00278fce0) Data frame received for 1 I0603 00:56:55.880124 7 log.go:172] (0xc0010315e0) (1) Data frame handling I0603 00:56:55.880149 7 log.go:172] (0xc0010315e0) (1) Data frame sent I0603 00:56:55.880166 7 log.go:172] (0xc00278fce0) (0xc0010315e0) Stream removed, broadcasting: 1 I0603 00:56:55.880198 7 log.go:172] (0xc00278fce0) Go away received I0603 00:56:55.880299 7 log.go:172] (0xc00278fce0) (0xc0010315e0) Stream removed, broadcasting: 1 I0603 00:56:55.880334 7 log.go:172] (0xc00278fce0) (0xc001930000) Stream removed, broadcasting: 3 I0603 00:56:55.880352 7 log.go:172] (0xc00278fce0) (0xc001930280) Stream removed, broadcasting: 5 Jun 3 00:56:55.880: INFO: Exec stderr: "" Jun 3 00:56:55.880: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jun 3 00:57:23.890: INFO: Container has restart count: 2 Jun 3 00:58:25.889: INFO: Container restart has stabilized STEP: test for subpath mounted with old value Jun 3 00:58:25.892: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-7781 PodName:var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:58:25.892: 
INFO: >>> kubeConfig: /root/.kube/config I0603 00:58:25.929390 7 log.go:172] (0xc00278fb80) (0xc000b434a0) Create stream I0603 00:58:25.929418 7 log.go:172] (0xc00278fb80) (0xc000b434a0) Stream added, broadcasting: 1 I0603 00:58:25.931204 7 log.go:172] (0xc00278fb80) Reply frame received for 1 I0603 00:58:25.931239 7 log.go:172] (0xc00278fb80) (0xc0026f61e0) Create stream I0603 00:58:25.931252 7 log.go:172] (0xc00278fb80) (0xc0026f61e0) Stream added, broadcasting: 3 I0603 00:58:25.932208 7 log.go:172] (0xc00278fb80) Reply frame received for 3 I0603 00:58:25.932232 7 log.go:172] (0xc00278fb80) (0xc0026f6320) Create stream I0603 00:58:25.932239 7 log.go:172] (0xc00278fb80) (0xc0026f6320) Stream added, broadcasting: 5 I0603 00:58:25.933418 7 log.go:172] (0xc00278fb80) Reply frame received for 5 I0603 00:58:26.031696 7 log.go:172] (0xc00278fb80) Data frame received for 3 I0603 00:58:26.031724 7 log.go:172] (0xc0026f61e0) (3) Data frame handling I0603 00:58:26.032073 7 log.go:172] (0xc00278fb80) Data frame received for 5 I0603 00:58:26.032121 7 log.go:172] (0xc0026f6320) (5) Data frame handling I0603 00:58:26.033103 7 log.go:172] (0xc00278fb80) Data frame received for 1 I0603 00:58:26.033279 7 log.go:172] (0xc000b434a0) (1) Data frame handling I0603 00:58:26.033301 7 log.go:172] (0xc000b434a0) (1) Data frame sent I0603 00:58:26.033456 7 log.go:172] (0xc00278fb80) (0xc000b434a0) Stream removed, broadcasting: 1 I0603 00:58:26.033483 7 log.go:172] (0xc00278fb80) Go away received I0603 00:58:26.033564 7 log.go:172] (0xc00278fb80) (0xc000b434a0) Stream removed, broadcasting: 1 I0603 00:58:26.033598 7 log.go:172] (0xc00278fb80) (0xc0026f61e0) Stream removed, broadcasting: 3 I0603 00:58:26.033621 7 log.go:172] (0xc00278fb80) (0xc0026f6320) Stream removed, broadcasting: 5 Jun 3 00:58:26.037: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-7781 PodName:var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 00:58:26.037: INFO: >>> kubeConfig: /root/.kube/config I0603 00:58:26.070152 7 log.go:172] (0xc000fa60b0) (0xc002548140) Create stream I0603 00:58:26.070181 7 log.go:172] (0xc000fa60b0) (0xc002548140) Stream added, broadcasting: 1 I0603 00:58:26.072267 7 log.go:172] (0xc000fa60b0) Reply frame received for 1 I0603 00:58:26.072292 7 log.go:172] (0xc000fa60b0) (0xc00158cfa0) Create stream I0603 00:58:26.072299 7 log.go:172] (0xc000fa60b0) (0xc00158cfa0) Stream added, broadcasting: 3 I0603 00:58:26.073543 7 log.go:172] (0xc000fa60b0) Reply frame received for 3 I0603 00:58:26.073591 7 log.go:172] (0xc000fa60b0) (0xc002548280) Create stream I0603 00:58:26.073612 7 log.go:172] (0xc000fa60b0) (0xc002548280) Stream added, broadcasting: 5 I0603 00:58:26.074744 7 log.go:172] (0xc000fa60b0) Reply frame received for 5 I0603 00:58:26.134122 7 log.go:172] (0xc000fa60b0) Data frame received for 3 I0603 00:58:26.134146 7 log.go:172] (0xc00158cfa0) (3) Data frame handling I0603 00:58:26.134179 7 log.go:172] (0xc000fa60b0) Data frame received for 5 I0603 00:58:26.134208 7 log.go:172] (0xc002548280) (5) Data frame handling I0603 00:58:26.135971 7 log.go:172] (0xc000fa60b0) Data frame received for 1 I0603 00:58:26.136015 7 log.go:172] (0xc002548140) (1) Data frame handling I0603 00:58:26.136039 7 log.go:172] (0xc002548140) (1) Data frame sent I0603 00:58:26.136056 7 log.go:172] (0xc000fa60b0) (0xc002548140) Stream removed, broadcasting: 1 I0603 00:58:26.136075 7 log.go:172] (0xc000fa60b0) Go away received I0603 00:58:26.136202 7 log.go:172] (0xc000fa60b0) (0xc002548140) Stream removed, broadcasting: 1 I0603 00:58:26.136236 7 log.go:172] (0xc000fa60b0) (0xc00158cfa0) Stream removed, broadcasting: 3 I0603 00:58:26.136248 7 log.go:172] (0xc000fa60b0) (0xc002548280) Stream removed, broadcasting: 5 Jun 3 00:58:26.136: INFO: Deleting pod "var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170" in namespace "var-expansion-7781" Jun 3 00:58:26.142: INFO: Wait up to 5m0s for pod "var-expansion-f1cae8b2-39cf-4b82-8ccc-aa5ff13c2170" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:59:06.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7781" for this suite. 
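(Context for the spec above: the mount path under test comes from volumeMounts[].subPathExpr, which the kubelet expands from the container's environment when it sets up the mount; the spec asserts that updating the variable and forcing a container restart does not remount the subpath. A minimal sketch of such a pod follows; it is not taken from this run, and the names subpath-expr-demo and SUBPATH_DIR are hypothetical.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expr-demo          # hypothetical name
spec:
  restartPolicy: Always
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: SUBPATH_DIR            # the variable the subpath is expanded from
      value: foo
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(SUBPATH_DIR)  # expanded when the mount is created; a later env change plus a container restart leaves the mount on the old path
  volumes:
  - name: workdir
    emptyDir: {}
EOF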
• [SLOW TEST:179.261 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":269,"skipped":4287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:59:06.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 00:59:12.405: INFO: DNS probes using dns-3190/dns-test-0b829687-aa94-4d5b-97c0-c892cfba455c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 00:59:12.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3190" for this suite. 
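(Context for the names probed above: a pod that sets hostname and subdomain, backed by a headless Service whose name equals the subdomain, gets the record <hostname>.<subdomain>.<namespace>.svc.cluster.local, and every pod also gets a dashed-IP record <a-b-c-d>.<namespace>.pod.cluster.local; that is what the wheezy and jessie probe loops check with getent and dig. A minimal sketch, not taken from this run; the label app: dns-demo is hypothetical.)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                  # headless; required for per-pod hostname records
  selector:
    app: dns-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    app: dns-demo
spec:
  hostname: dns-querier-2          # first label of the pod's DNS name
  subdomain: dns-test-service-2    # must equal the headless Service name
  containers:
  - name: querier
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
# From inside the cluster, the record should then resolve:
#   getent hosts dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local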
• [SLOW TEST:6.321 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":270,"skipped":4335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 00:59:12.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 00:59:12.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 00:59:12.746: INFO: Waiting for terminating namespaces to be deleted... Jun 3 00:59:12.748: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jun 3 00:59:12.754: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) Jun 3 00:59:12.754: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 3 00:59:12.754: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) Jun 3 00:59:12.754: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 3 00:59:12.754: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 3 00:59:12.754: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 00:59:12.754: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 3 00:59:12.754: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 00:59:12.754: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jun 3 00:59:12.759: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) Jun 3 00:59:12.759: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 3 00:59:12.759: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) Jun 3 00:59:12.759: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 3 00:59:12.759: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 3 00:59:12.759: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 00:59:12.759: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 3 00:59:12.759: INFO: Container kube-proxy ready: true, restart count
0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a6b54337-f9a5-490f-9121-b55632d0e72e 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-a6b54337-f9a5-490f-9121-b55632d0e72e off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a6b54337-f9a5-490f-9121-b55632d0e72e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:04:20.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9182" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.432 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":271,"skipped":4361,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:04:20.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3226 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-3226 STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3226 Jun 3 01:04:21.697: INFO: Found 0 stateful pods, waiting for 1 Jun 3
01:04:31.703: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 3 01:04:31.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3226 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 01:04:34.716: INFO: stderr: "I0603 01:04:34.590889 3709 log.go:172] (0xc0000e8370) (0xc00066ee60) Create stream\nI0603 01:04:34.590936 3709 log.go:172] (0xc0000e8370) (0xc00066ee60) Stream added, broadcasting: 1\nI0603 01:04:34.594413 3709 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0603 01:04:34.594559 3709 log.go:172] (0xc0000e8370) (0xc0006665a0) Create stream\nI0603 01:04:34.594790 3709 log.go:172] (0xc0000e8370) (0xc0006665a0) Stream added, broadcasting: 3\nI0603 01:04:34.598847 3709 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0603 01:04:34.598890 3709 log.go:172] (0xc0000e8370) (0xc000666f00) Create stream\nI0603 01:04:34.598901 3709 log.go:172] (0xc0000e8370) (0xc000666f00) Stream added, broadcasting: 5\nI0603 01:04:34.599691 3709 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0603 01:04:34.679957 3709 log.go:172] (0xc0000e8370) Data frame received for 5\nI0603 01:04:34.679980 3709 log.go:172] (0xc000666f00) (5) Data frame handling\nI0603 01:04:34.679993 3709 log.go:172] (0xc000666f00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0603 01:04:34.707229 3709 log.go:172] (0xc0000e8370) Data frame received for 3\nI0603 01:04:34.707244 3709 log.go:172] (0xc0006665a0) (3) Data frame handling\nI0603 01:04:34.707251 3709 log.go:172] (0xc0006665a0) (3) Data frame sent\nI0603 01:04:34.707655 3709 log.go:172] (0xc0000e8370) Data frame received for 3\nI0603 01:04:34.707670 3709 log.go:172] (0xc0006665a0) (3) Data frame handling\nI0603 01:04:34.707955 3709 log.go:172] (0xc0000e8370) Data frame received for 5\nI0603 01:04:34.707998 3709 log.go:172] (0xc000666f00) (5) Data frame handling\nI0603 01:04:34.710058 3709 log.go:172] (0xc0000e8370) Data frame received for 1\nI0603 01:04:34.710086 3709 log.go:172] (0xc00066ee60) (1) Data frame handling\nI0603 01:04:34.710106 3709 log.go:172] (0xc00066ee60) (1) Data frame sent\nI0603 01:04:34.710130 3709 log.go:172] (0xc0000e8370) (0xc00066ee60) Stream removed, broadcasting: 1\nI0603 01:04:34.710157 3709 log.go:172] (0xc0000e8370) Go away received\nI0603 01:04:34.710660 3709 log.go:172] (0xc0000e8370) (0xc00066ee60) Stream removed, broadcasting: 1\nI0603 01:04:34.710683 3709 log.go:172] (0xc0000e8370) (0xc0006665a0) Stream removed, broadcasting: 3\nI0603 01:04:34.710694 3709 log.go:172] (0xc0000e8370) (0xc000666f00) Stream removed, broadcasting: 5\n" Jun 3 01:04:34.716: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 01:04:34.716: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 01:04:34.720: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 3 01:04:44.725: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 01:04:44.725: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 01:04:44.752: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:04:44.752: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 
01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:04:44.752: INFO: Jun 3 01:04:44.752: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 3 01:04:45.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98363791s Jun 3 01:04:47.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977699758s Jun 3 01:04:48.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.605934085s Jun 3 01:04:49.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.594024251s Jun 3 01:04:50.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.589355093s Jun 3 01:04:51.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.58423834s Jun 3 01:04:52.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.579389072s Jun 3 01:04:53.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.573918555s Jun 3 01:04:54.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 569.051638ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3226 Jun 3 01:04:55.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3226 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 01:04:55.426: INFO: stderr: "I0603 01:04:55.347085 3743 log.go:172] (0xc0006f2000) (0xc000452f00) Create stream\nI0603 01:04:55.347180 3743 log.go:172] (0xc0006f2000) (0xc000452f00) Stream added, broadcasting: 1\nI0603 01:04:55.349053 3743 log.go:172] (0xc0006f2000) Reply frame received for 1\nI0603 01:04:55.349092 3743 log.go:172] (0xc0006f2000) (0xc0005fefa0) Create stream\nI0603 01:04:55.349104 3743 log.go:172] (0xc0006f2000) (0xc0005fefa0) Stream added, broadcasting: 3\nI0603 01:04:55.350316 3743 log.go:172] (0xc0006f2000) Reply frame received for 3\nI0603 01:04:55.350360 3743 log.go:172] (0xc0006f2000) (0xc00041e640) Create stream\nI0603 01:04:55.350372 3743 log.go:172] (0xc0006f2000) (0xc00041e640) Stream added, broadcasting: 5\nI0603 01:04:55.351444 3743 log.go:172] (0xc0006f2000) Reply frame received for 5\nI0603 01:04:55.419608 3743 log.go:172] (0xc0006f2000) Data frame received for 3\nI0603 01:04:55.419645 3743 log.go:172] (0xc0005fefa0) (3) Data frame handling\nI0603 01:04:55.419657 3743 log.go:172] (0xc0005fefa0) (3) Data frame sent\nI0603 01:04:55.419665 3743 log.go:172] (0xc0006f2000) Data frame received for 3\nI0603 01:04:55.419673 3743 log.go:172] (0xc0005fefa0) (3) Data frame handling\nI0603 01:04:55.419718 3743 log.go:172] (0xc0006f2000) Data frame received for 5\nI0603 01:04:55.419758 3743 log.go:172] (0xc00041e640) (5) Data frame handling\nI0603 01:04:55.419782 3743 log.go:172] (0xc00041e640) (5) Data frame sent\nI0603 01:04:55.419803 3743 log.go:172] (0xc0006f2000) Data frame received for 5\nI0603 01:04:55.419815 3743 log.go:172] (0xc00041e640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0603 01:04:55.421288 3743 log.go:172] (0xc0006f2000) Data frame received for 1\nI0603 01:04:55.421310 3743 log.go:172] (0xc000452f00) (1) Data frame 
handling\nI0603 01:04:55.421328 3743 log.go:172] (0xc000452f00) (1) Data frame sent\nI0603 01:04:55.421352 3743 log.go:172] (0xc0006f2000) (0xc000452f00) Stream removed, broadcasting: 1\nI0603 01:04:55.421386 3743 log.go:172] (0xc0006f2000) Go away received\nI0603 01:04:55.421782 3743 log.go:172] (0xc0006f2000) (0xc000452f00) Stream removed, broadcasting: 1\nI0603 01:04:55.421804 3743 log.go:172] (0xc0006f2000) (0xc0005fefa0) Stream removed, broadcasting: 3\nI0603 01:04:55.421815 3743 log.go:172] (0xc0006f2000) (0xc00041e640) Stream removed, broadcasting: 5\n" Jun 3 01:04:55.427: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 01:04:55.427: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 01:04:55.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3226 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 01:04:55.658: INFO: stderr: "I0603 01:04:55.563098 3763 log.go:172] (0xc000528b00) (0xc00052d5e0) Create stream\nI0603 01:04:55.563150 3763 log.go:172] (0xc000528b00) (0xc00052d5e0) Stream added, broadcasting: 1\nI0603 01:04:55.566035 3763 log.go:172] (0xc000528b00) Reply frame received for 1\nI0603 01:04:55.566097 3763 log.go:172] (0xc000528b00) (0xc00052dae0) Create stream\nI0603 01:04:55.566117 3763 log.go:172] (0xc000528b00) (0xc00052dae0) Stream added, broadcasting: 3\nI0603 01:04:55.567149 3763 log.go:172] (0xc000528b00) Reply frame received for 3\nI0603 01:04:55.567203 3763 log.go:172] (0xc000528b00) (0xc000538500) Create stream\nI0603 01:04:55.567240 3763 log.go:172] (0xc000528b00) (0xc000538500) Stream added, broadcasting: 5\nI0603 01:04:55.568326 3763 log.go:172] (0xc000528b00) Reply frame received for 5\nI0603 01:04:55.621862 3763 log.go:172] (0xc000528b00) Data frame received for 5\nI0603 01:04:55.621893 3763 log.go:172] (0xc000538500) (5) Data frame handling\nI0603 01:04:55.621915 3763 log.go:172] (0xc000538500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0603 01:04:55.648376 3763 log.go:172] (0xc000528b00) Data frame received for 5\nI0603 01:04:55.648418 3763 log.go:172] (0xc000538500) (5) Data frame handling\nI0603 01:04:55.648453 3763 log.go:172] (0xc000538500) (5) Data frame sent\nI0603 01:04:55.648479 3763 log.go:172] (0xc000528b00) Data frame received for 5\nI0603 01:04:55.648504 3763 log.go:172] (0xc000538500) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0603 01:04:55.648547 3763 log.go:172] (0xc000538500) (5) Data frame sent\nI0603 01:04:55.648605 3763 log.go:172] (0xc000528b00) Data frame received for 3\nI0603 01:04:55.648621 3763 log.go:172] (0xc00052dae0) (3) Data frame handling\nI0603 01:04:55.648631 3763 log.go:172] (0xc00052dae0) (3) Data frame sent\nI0603 01:04:55.649374 3763 log.go:172] (0xc000528b00) Data frame received for 3\nI0603 01:04:55.649407 3763 log.go:172] (0xc00052dae0) (3) Data frame handling\nI0603 01:04:55.649434 3763 log.go:172] (0xc000528b00) Data frame received for 5\nI0603 01:04:55.649443 3763 log.go:172] (0xc000538500) (5) Data frame handling\nI0603 01:04:55.650889 3763 log.go:172] (0xc000528b00) Data frame received for 1\nI0603 01:04:55.650900 3763 log.go:172] (0xc00052d5e0) (1) Data frame handling\nI0603 01:04:55.650908 3763 log.go:172] (0xc00052d5e0) (1) Data frame sent\nI0603 01:04:55.650916 3763 
log.go:172] (0xc000528b00) (0xc00052d5e0) Stream removed, broadcasting: 1\nI0603 01:04:55.651160 3763 log.go:172] (0xc000528b00) (0xc00052d5e0) Stream removed, broadcasting: 1\nI0603 01:04:55.651174 3763 log.go:172] (0xc000528b00) (0xc00052dae0) Stream removed, broadcasting: 3\nI0603 01:04:55.651234 3763 log.go:172] (0xc000528b00) Go away received\nI0603 01:04:55.651297 3763 log.go:172] (0xc000528b00) (0xc000538500) Stream removed, broadcasting: 5\n" Jun 3 01:04:55.658: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 01:04:55.658: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 01:04:55.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3226 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 01:04:55.863: INFO: stderr: "I0603 01:04:55.787859 3785 log.go:172] (0xc0006842c0) (0xc000139680) Create stream\nI0603 01:04:55.787910 3785 log.go:172] (0xc0006842c0) (0xc000139680) Stream added, broadcasting: 1\nI0603 01:04:55.790566 3785 log.go:172] (0xc0006842c0) Reply frame received for 1\nI0603 01:04:55.790603 3785 log.go:172] (0xc0006842c0) (0xc00023c640) Create stream\nI0603 01:04:55.790614 3785 log.go:172] (0xc0006842c0) (0xc00023c640) Stream added, broadcasting: 3\nI0603 01:04:55.791569 3785 log.go:172] (0xc0006842c0) Reply frame received for 3\nI0603 01:04:55.791614 3785 log.go:172] (0xc0006842c0) (0xc0003d7f40) Create stream\nI0603 01:04:55.791630 3785 log.go:172] (0xc0006842c0) (0xc0003d7f40) Stream added, broadcasting: 5\nI0603 01:04:55.792437 3785 log.go:172] (0xc0006842c0) Reply frame received for 5\nI0603 01:04:55.855082 3785 log.go:172] (0xc0006842c0) Data frame received for 3\nI0603 01:04:55.855136 3785 log.go:172] (0xc0006842c0) Data frame received for 5\nI0603 01:04:55.855177 3785 log.go:172] (0xc0003d7f40) (5) Data frame handling\nI0603 01:04:55.855193 3785 log.go:172] (0xc0003d7f40) (5) Data frame sent\nI0603 01:04:55.855202 3785 log.go:172] (0xc0006842c0) Data frame received for 5\nI0603 01:04:55.855212 3785 log.go:172] (0xc0003d7f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0603 01:04:55.855245 3785 log.go:172] (0xc00023c640) (3) Data frame handling\nI0603 01:04:55.855257 3785 log.go:172] (0xc00023c640) (3) Data frame sent\nI0603 01:04:55.855266 3785 log.go:172] (0xc0006842c0) Data frame received for 3\nI0603 01:04:55.855277 3785 log.go:172] (0xc00023c640) (3) Data frame handling\nI0603 01:04:55.856827 3785 log.go:172] (0xc0006842c0) Data frame received for 1\nI0603 01:04:55.856847 3785 log.go:172] (0xc000139680) (1) Data frame handling\nI0603 01:04:55.856858 3785 log.go:172] (0xc000139680) (1) Data frame sent\nI0603 01:04:55.856869 3785 log.go:172] (0xc0006842c0) (0xc000139680) Stream removed, broadcasting: 1\nI0603 01:04:55.856945 3785 log.go:172] (0xc0006842c0) Go away received\nI0603 01:04:55.857267 3785 log.go:172] (0xc0006842c0) (0xc000139680) Stream removed, broadcasting: 1\nI0603 01:04:55.857287 3785 log.go:172] (0xc0006842c0) (0xc00023c640) Stream removed, broadcasting: 3\nI0603 01:04:55.857298 3785 log.go:172] (0xc0006842c0) (0xc0003d7f40) Stream removed, broadcasting: 5\n" Jun 3 01:04:55.863: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 01:04:55.863: INFO: stdout of mv 
-v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 01:04:55.867: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 01:04:55.867: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 01:04:55.867: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 3 01:04:55.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3226 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 01:04:56.084: INFO: stderr: "I0603 01:04:55.993872 3807 log.go:172] (0xc0009ba000) (0xc0005485a0) Create stream\nI0603 01:04:55.993956 3807 log.go:172] (0xc0009ba000) (0xc0005485a0) Stream added, broadcasting: 1\nI0603 01:04:55.996979 3807 log.go:172] (0xc0009ba000) Reply frame received for 1\nI0603 01:04:55.997025 3807 log.go:172] (0xc0009ba000) (0xc0005499a0) Create stream\nI0603 01:04:55.997040 3807 log.go:172] (0xc0009ba000) (0xc0005499a0) Stream added, broadcasting: 3\nI0603 01:04:55.998293 3807 log.go:172] (0xc0009ba000) Reply frame received for 3\nI0603 01:04:55.998332 3807 log.go:172] (0xc0009ba000) (0xc000532280) Create stream\nI0603 01:04:55.998349 3807 log.go:172] (0xc0009ba000) (0xc000532280) Stream added, broadcasting: 5\nI0603 01:04:55.999269 3807 log.go:172] (0xc0009ba000) Reply frame received for 5\nI0603 01:04:56.075015 3807 log.go:172] (0xc0009ba000) Data frame received for 5\nI0603 01:04:56.075061 3807 log.go:172] (0xc000532280) (5) Data frame handling\nI0603 01:04:56.075081 3807 log.go:172] (0xc000532280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0603 01:04:56.075102 3807 log.go:172] (0xc0009ba000) Data frame received for 3\nI0603 01:04:56.075111 3807 log.go:172] (0xc0005499a0) (3) Data frame handling\nI0603 01:04:56.075128 3807 log.go:172] (0xc0005499a0) (3) Data frame sent\nI0603 01:04:56.075162 3807 log.go:172] (0xc0009ba000) Data frame received for 3\nI0603 01:04:56.075188 3807 log.go:172] (0xc0005499a0) (3) Data frame handling\nI0603 01:04:56.075214 3807 log.go:172] (0xc0009ba000) Data frame received for 5\nI0603 01:04:56.075244 3807 log.go:172] (0xc000532280) (5) Data frame handling\nI0603 01:04:56.077032 3807 log.go:172] (0xc0009ba000) Data frame received for 1\nI0603 01:04:56.077058 3807 log.go:172] (0xc0005485a0) (1) Data frame handling\nI0603 01:04:56.077104 3807 log.go:172] (0xc0005485a0) (1) Data frame sent\nI0603 01:04:56.077319 3807 log.go:172] (0xc0009ba000) (0xc0005485a0) Stream removed, broadcasting: 1\nI0603 01:04:56.077508 3807 log.go:172] (0xc0009ba000) Go away received\nI0603 01:04:56.077750 3807 log.go:172] (0xc0009ba000) (0xc0005485a0) Stream removed, broadcasting: 1\nI0603 01:04:56.077775 3807 log.go:172] (0xc0009ba000) (0xc0005499a0) Stream removed, broadcasting: 3\nI0603 01:04:56.077794 3807 log.go:172] (0xc0009ba000) (0xc000532280) Stream removed, broadcasting: 5\n" Jun 3 01:04:56.084: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 01:04:56.084: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 01:04:56.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3226 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 01:04:56.338: INFO: stderr: "I0603 01:04:56.225519 3828 log.go:172] (0xc000be3550) (0xc000af2460) Create stream\nI0603 01:04:56.225582 3828 log.go:172] (0xc000be3550) (0xc000af2460) Stream added, broadcasting: 1\nI0603 01:04:56.230177 3828 log.go:172] (0xc000be3550) Reply frame received for 1\nI0603 01:04:56.230218 3828 log.go:172] (0xc000be3550) (0xc00071af00) Create stream\nI0603 01:04:56.230229 3828 log.go:172] (0xc000be3550) (0xc00071af00) Stream added, broadcasting: 3\nI0603 01:04:56.231141 3828 log.go:172] (0xc000be3550) Reply frame received for 3\nI0603 01:04:56.231183 3828 log.go:172] (0xc000be3550) (0xc0006ead20) Create stream\nI0603 01:04:56.231199 3828 log.go:172] (0xc000be3550) (0xc0006ead20) Stream added, broadcasting: 5\nI0603 01:04:56.232028 3828 log.go:172] (0xc000be3550) Reply frame received for 5\nI0603 01:04:56.294186 3828 log.go:172] (0xc000be3550) Data frame received for 5\nI0603 01:04:56.294213 3828 log.go:172] (0xc0006ead20) (5) Data frame handling\nI0603 01:04:56.294229 3828 log.go:172] (0xc0006ead20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0603 01:04:56.329370 3828 log.go:172] (0xc000be3550) Data frame received for 5\nI0603 01:04:56.329395 3828 log.go:172] (0xc0006ead20) (5) Data frame handling\nI0603 01:04:56.329415 3828 log.go:172] (0xc000be3550) Data frame received for 3\nI0603 01:04:56.329434 3828 log.go:172] (0xc00071af00) (3) Data frame handling\nI0603 01:04:56.329444 3828 log.go:172] (0xc00071af00) (3) Data frame sent\nI0603 01:04:56.329452 3828 log.go:172] (0xc000be3550) Data frame received for 3\nI0603 01:04:56.329459 3828 log.go:172] (0xc00071af00) (3) Data frame handling\nI0603 01:04:56.330979 3828 log.go:172] (0xc000be3550) Data frame received for 1\nI0603 01:04:56.331006 3828 log.go:172] (0xc000af2460) (1) Data frame handling\nI0603 01:04:56.331053 3828 log.go:172] (0xc000af2460) (1) Data frame sent\nI0603 01:04:56.331073 3828 log.go:172] (0xc000be3550) (0xc000af2460) Stream removed, broadcasting: 1\nI0603 01:04:56.331094 3828 log.go:172] (0xc000be3550) Go away received\nI0603 01:04:56.331334 3828 log.go:172] (0xc000be3550) (0xc000af2460) Stream removed, broadcasting: 1\nI0603 01:04:56.331347 3828 log.go:172] (0xc000be3550) (0xc00071af00) Stream removed, broadcasting: 3\nI0603 01:04:56.331353 3828 log.go:172] (0xc000be3550) (0xc0006ead20) Stream removed, broadcasting: 5\n" Jun 3 01:04:56.338: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 01:04:56.338: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 01:04:56.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3226 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 01:04:56.602: INFO: stderr: "I0603 01:04:56.457499 3850 log.go:172] (0xc0006893f0) (0xc0009c23c0) Create stream\nI0603 01:04:56.457556 3850 log.go:172] (0xc0006893f0) (0xc0009c23c0) Stream added, broadcasting: 1\nI0603 01:04:56.462292 3850 log.go:172] (0xc0006893f0) Reply frame received for 1\nI0603 01:04:56.462333 3850 log.go:172] (0xc0006893f0) (0xc00060a640) Create stream\nI0603 01:04:56.462342 3850 log.go:172] (0xc0006893f0) (0xc00060a640) Stream added, broadcasting: 3\nI0603 01:04:56.463116 3850 log.go:172] (0xc0006893f0) Reply frame 
received for 3\nI0603 01:04:56.463139 3850 log.go:172] (0xc0006893f0) (0xc00060ab40) Create stream\nI0603 01:04:56.463147 3850 log.go:172] (0xc0006893f0) (0xc00060ab40) Stream added, broadcasting: 5\nI0603 01:04:56.463822 3850 log.go:172] (0xc0006893f0) Reply frame received for 5\nI0603 01:04:56.543409 3850 log.go:172] (0xc0006893f0) Data frame received for 5\nI0603 01:04:56.543439 3850 log.go:172] (0xc00060ab40) (5) Data frame handling\nI0603 01:04:56.543461 3850 log.go:172] (0xc00060ab40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0603 01:04:56.592083 3850 log.go:172] (0xc0006893f0) Data frame received for 3\nI0603 01:04:56.592178 3850 log.go:172] (0xc00060a640) (3) Data frame handling\nI0603 01:04:56.592208 3850 log.go:172] (0xc00060a640) (3) Data frame sent\nI0603 01:04:56.592220 3850 log.go:172] (0xc0006893f0) Data frame received for 3\nI0603 01:04:56.592224 3850 log.go:172] (0xc00060a640) (3) Data frame handling\nI0603 01:04:56.592585 3850 log.go:172] (0xc0006893f0) Data frame received for 5\nI0603 01:04:56.592628 3850 log.go:172] (0xc00060ab40) (5) Data frame handling\nI0603 01:04:56.594895 3850 log.go:172] (0xc0006893f0) Data frame received for 1\nI0603 01:04:56.594932 3850 log.go:172] (0xc0009c23c0) (1) Data frame handling\nI0603 01:04:56.594955 3850 log.go:172] (0xc0009c23c0) (1) Data frame sent\nI0603 01:04:56.594991 3850 log.go:172] (0xc0006893f0) (0xc0009c23c0) Stream removed, broadcasting: 1\nI0603 01:04:56.595025 3850 log.go:172] (0xc0006893f0) Go away received\nI0603 01:04:56.595484 3850 log.go:172] (0xc0006893f0) (0xc0009c23c0) Stream removed, broadcasting: 1\nI0603 01:04:56.595512 3850 log.go:172] (0xc0006893f0) (0xc00060a640) Stream removed, broadcasting: 3\nI0603 01:04:56.595526 3850 log.go:172] (0xc0006893f0) (0xc00060ab40) Stream removed, broadcasting: 5\n" Jun 3 01:04:56.602: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 01:04:56.602: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 01:04:56.602: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 01:04:56.607: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 3 01:05:06.615: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 01:05:06.615: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 3 01:05:06.615: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 3 01:05:06.674: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:06.674: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:06.674: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:06.674: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:06.675: INFO: Jun 3 01:05:06.675: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 01:05:07.756: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:07.756: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:07.756: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:07.756: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:07.756: INFO: Jun 3 01:05:07.756: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 01:05:08.763: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:08.763: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:08.763: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:08.763: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:08.763: INFO: Jun 3 01:05:08.763: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 01:05:09.767: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:09.767: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:09.768: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:09.768: INFO: Jun 3 01:05:09.768: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 3 01:05:10.773: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:10.774: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:10.774: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:10.774: INFO: Jun 3 01:05:10.774: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 3 01:05:11.778: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:11.778: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:11.778: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:11.779: INFO: Jun 3 01:05:11.779: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 3 01:05:12.784: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:12.784: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:12.784: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:12.784: INFO: Jun 3 01:05:12.784: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 3 01:05:13.790: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:13.790: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:13.790: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:13.790: INFO: Jun 3 01:05:13.790: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 3 01:05:14.797: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 01:05:14.797: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:21 +0000 UTC }] Jun 3 01:05:14.797: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 01:04:44 +0000 UTC }] Jun 3 01:05:14.797: INFO: Jun 3 01:05:14.797: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 3 01:05:15.801: INFO: Verifying statefulset ss doesn't scale past 0 for another 825.58276ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-3226 Jun 3 01:05:16.806: INFO: Scaling statefulset ss to 0 Jun 3 01:05:16.817: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 3 01:05:16.820: INFO: Deleting all statefulsets in ns statefulset-3226 Jun 3 01:05:16.822: INFO: Scaling statefulset ss to 0 Jun 3 01:05:16.832: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 01:05:16.835: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:05:16.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3226" for this suite. • [SLOW TEST:55.914 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":272,"skipped":4363,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:05:16.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Jun 3 01:05:16.933: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix516684465/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:05:17.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace
"kubectl-9211" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":273,"skipped":4368,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:05:17.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:05:23.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6594" for this suite. STEP: Destroying namespace "nsdeletetest-7745" for this suite. Jun 3 01:05:23.321: INFO: Namespace nsdeletetest-7745 was already deleted STEP: Destroying namespace "nsdeletetest-8844" for this suite. 
• [SLOW TEST:6.306 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":274,"skipped":4382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:05:23.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3545, will wait for the garbage collector to delete the pods Jun 3 01:05:29.480: INFO: Deleting Job.batch foo took: 6.248161ms Jun 3 01:05:29.580: INFO: Terminating Job.batch foo pods took: 100.29464ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:06:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3545" for this suite. 
• [SLOW TEST:41.984 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":275,"skipped":4422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:06:05.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1947 STEP: creating service affinity-nodeport in namespace services-1947 STEP: creating replication controller affinity-nodeport in namespace services-1947 I0603 01:06:05.500106 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-1947, replica count: 3 I0603 01:06:08.550556 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 01:06:11.550790 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 01:06:11.561: INFO: Creating new exec pod Jun 3 01:06:16.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1947 execpod-affinityhp6bq -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jun 3 01:06:16.831: INFO: stderr: "I0603 01:06:16.727535 3888 log.go:172] (0xc000b3af20) (0xc000b34500) Create stream\nI0603 01:06:16.727583 3888 log.go:172] (0xc000b3af20) (0xc000b34500) Stream added, broadcasting: 1\nI0603 01:06:16.731030 3888 log.go:172] (0xc000b3af20) Reply frame received for 1\nI0603 01:06:16.731063 3888 log.go:172] (0xc000b3af20) (0xc0006be000) Create stream\nI0603 01:06:16.731072 3888 log.go:172] (0xc000b3af20) (0xc0006be000) Stream added, broadcasting: 3\nI0603 01:06:16.731902 3888 log.go:172] (0xc000b3af20) Reply frame received for 3\nI0603 01:06:16.731925 3888 log.go:172] (0xc000b3af20) (0xc00052c640) Create stream\nI0603 01:06:16.731932 3888 log.go:172] (0xc000b3af20) (0xc00052c640) Stream added, broadcasting: 5\nI0603 01:06:16.732663 3888 log.go:172] (0xc000b3af20) Reply frame received for 5\nI0603 01:06:16.821988 3888 log.go:172] (0xc000b3af20) Data frame received for 5\nI0603 01:06:16.822029 3888 log.go:172] (0xc00052c640) (5) Data frame handling\nI0603 01:06:16.822061 3888 log.go:172] (0xc00052c640) (5) Data frame sent\nI0603 01:06:16.822080 3888 log.go:172] (0xc000b3af20) Data frame received for 5\nI0603 01:06:16.822099 3888 log.go:172] (0xc00052c640) 
(5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0603 01:06:16.822131 3888 log.go:172] (0xc00052c640) (5) Data frame sent\nI0603 01:06:16.822305 3888 log.go:172] (0xc000b3af20) Data frame received for 3\nI0603 01:06:16.822337 3888 log.go:172] (0xc000b3af20) Data frame received for 5\nI0603 01:06:16.822385 3888 log.go:172] (0xc00052c640) (5) Data frame handling\nI0603 01:06:16.822419 3888 log.go:172] (0xc0006be000) (3) Data frame handling\nI0603 01:06:16.824274 3888 log.go:172] (0xc000b3af20) Data frame received for 1\nI0603 01:06:16.824297 3888 log.go:172] (0xc000b34500) (1) Data frame handling\nI0603 01:06:16.824318 3888 log.go:172] (0xc000b34500) (1) Data frame sent\nI0603 01:06:16.824358 3888 log.go:172] (0xc000b3af20) (0xc000b34500) Stream removed, broadcasting: 1\nI0603 01:06:16.824410 3888 log.go:172] (0xc000b3af20) Go away received\nI0603 01:06:16.824682 3888 log.go:172] (0xc000b3af20) (0xc000b34500) Stream removed, broadcasting: 1\nI0603 01:06:16.824699 3888 log.go:172] (0xc000b3af20) (0xc0006be000) Stream removed, broadcasting: 3\nI0603 01:06:16.824706 3888 log.go:172] (0xc000b3af20) (0xc00052c640) Stream removed, broadcasting: 5\n" Jun 3 01:06:16.831: INFO: stdout: "" Jun 3 01:06:16.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1947 execpod-affinityhp6bq -- /bin/sh -x -c nc -zv -t -w 2 10.111.188.9 80' Jun 3 01:06:17.029: INFO: stderr: "I0603 01:06:16.955584 3910 log.go:172] (0xc00003a420) (0xc000532320) Create stream\nI0603 01:06:16.955645 3910 log.go:172] (0xc00003a420) (0xc000532320) Stream added, broadcasting: 1\nI0603 01:06:16.958636 3910 log.go:172] (0xc00003a420) Reply frame received for 1\nI0603 01:06:16.958678 3910 log.go:172] (0xc00003a420) (0xc0005332c0) Create stream\nI0603 01:06:16.958688 3910 log.go:172] (0xc00003a420) (0xc0005332c0) Stream added, broadcasting: 3\nI0603 01:06:16.959712 3910 log.go:172] (0xc00003a420) Reply frame received for 3\nI0603 01:06:16.959748 3910 log.go:172] (0xc00003a420) (0xc0004c2e60) Create stream\nI0603 01:06:16.959765 3910 log.go:172] (0xc00003a420) (0xc0004c2e60) Stream added, broadcasting: 5\nI0603 01:06:16.960566 3910 log.go:172] (0xc00003a420) Reply frame received for 5\nI0603 01:06:17.022733 3910 log.go:172] (0xc00003a420) Data frame received for 3\nI0603 01:06:17.022757 3910 log.go:172] (0xc0005332c0) (3) Data frame handling\nI0603 01:06:17.022783 3910 log.go:172] (0xc00003a420) Data frame received for 5\nI0603 01:06:17.022809 3910 log.go:172] (0xc0004c2e60) (5) Data frame handling\nI0603 01:06:17.022827 3910 log.go:172] (0xc0004c2e60) (5) Data frame sent\nI0603 01:06:17.022843 3910 log.go:172] (0xc00003a420) Data frame received for 5\nI0603 01:06:17.022858 3910 log.go:172] (0xc0004c2e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.188.9 80\nConnection to 10.111.188.9 80 port [tcp/http] succeeded!\nI0603 01:06:17.024569 3910 log.go:172] (0xc00003a420) Data frame received for 1\nI0603 01:06:17.024608 3910 log.go:172] (0xc000532320) (1) Data frame handling\nI0603 01:06:17.024639 3910 log.go:172] (0xc000532320) (1) Data frame sent\nI0603 01:06:17.024665 3910 log.go:172] (0xc00003a420) (0xc000532320) Stream removed, broadcasting: 1\nI0603 01:06:17.024708 3910 log.go:172] (0xc00003a420) Go away received\nI0603 01:06:17.024970 3910 log.go:172] (0xc00003a420) (0xc000532320) Stream removed, broadcasting: 1\nI0603 01:06:17.024985 3910 log.go:172] (0xc00003a420) 
(0xc0005332c0) Stream removed, broadcasting: 3\nI0603 01:06:17.024991 3910 log.go:172] (0xc00003a420) (0xc0004c2e60) Stream removed, broadcasting: 5\n" Jun 3 01:06:17.030: INFO: stdout: "" Jun 3 01:06:17.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1947 execpod-affinityhp6bq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31276' Jun 3 01:06:17.233: INFO: stderr: "I0603 01:06:17.151210 3930 log.go:172] (0xc000a80f20) (0xc0007006e0) Create stream\nI0603 01:06:17.151269 3930 log.go:172] (0xc000a80f20) (0xc0007006e0) Stream added, broadcasting: 1\nI0603 01:06:17.156441 3930 log.go:172] (0xc000a80f20) Reply frame received for 1\nI0603 01:06:17.156488 3930 log.go:172] (0xc000a80f20) (0xc0006e95e0) Create stream\nI0603 01:06:17.156506 3930 log.go:172] (0xc000a80f20) (0xc0006e95e0) Stream added, broadcasting: 3\nI0603 01:06:17.157604 3930 log.go:172] (0xc000a80f20) Reply frame received for 3\nI0603 01:06:17.157628 3930 log.go:172] (0xc000a80f20) (0xc00067af00) Create stream\nI0603 01:06:17.157637 3930 log.go:172] (0xc000a80f20) (0xc00067af00) Stream added, broadcasting: 5\nI0603 01:06:17.158510 3930 log.go:172] (0xc000a80f20) Reply frame received for 5\nI0603 01:06:17.226095 3930 log.go:172] (0xc000a80f20) Data frame received for 5\nI0603 01:06:17.226132 3930 log.go:172] (0xc00067af00) (5) Data frame handling\nI0603 01:06:17.226147 3930 log.go:172] (0xc00067af00) (5) Data frame sent\nI0603 01:06:17.226153 3930 log.go:172] (0xc000a80f20) Data frame received for 5\nI0603 01:06:17.226159 3930 log.go:172] (0xc00067af00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31276\nConnection to 172.17.0.13 31276 port [tcp/31276] succeeded!\nI0603 01:06:17.226179 3930 log.go:172] (0xc000a80f20) Data frame received for 3\nI0603 01:06:17.226194 3930 log.go:172] (0xc0006e95e0) (3) Data frame handling\nI0603 01:06:17.227583 3930 log.go:172] (0xc000a80f20) Data frame received for 1\nI0603 01:06:17.227602 3930 log.go:172] (0xc0007006e0) (1) Data frame handling\nI0603 01:06:17.227611 3930 log.go:172] (0xc0007006e0) (1) Data frame sent\nI0603 01:06:17.227628 3930 log.go:172] (0xc000a80f20) (0xc0007006e0) Stream removed, broadcasting: 1\nI0603 01:06:17.227643 3930 log.go:172] (0xc000a80f20) Go away received\nI0603 01:06:17.227974 3930 log.go:172] (0xc000a80f20) (0xc0007006e0) Stream removed, broadcasting: 1\nI0603 01:06:17.227996 3930 log.go:172] (0xc000a80f20) (0xc0006e95e0) Stream removed, broadcasting: 3\nI0603 01:06:17.228013 3930 log.go:172] (0xc000a80f20) (0xc00067af00) Stream removed, broadcasting: 5\n" Jun 3 01:06:17.234: INFO: stdout: "" Jun 3 01:06:17.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1947 execpod-affinityhp6bq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31276' Jun 3 01:06:17.542: INFO: stderr: "I0603 01:06:17.363293 3951 log.go:172] (0xc0005c4fd0) (0xc000a1a500) Create stream\nI0603 01:06:17.363600 3951 log.go:172] (0xc0005c4fd0) (0xc000a1a500) Stream added, broadcasting: 1\nI0603 01:06:17.368577 3951 log.go:172] (0xc0005c4fd0) Reply frame received for 1\nI0603 01:06:17.368626 3951 log.go:172] (0xc0005c4fd0) (0xc000756aa0) Create stream\nI0603 01:06:17.368645 3951 log.go:172] (0xc0005c4fd0) (0xc000756aa0) Stream added, broadcasting: 3\nI0603 01:06:17.369690 3951 log.go:172] (0xc0005c4fd0) Reply frame received for 3\nI0603 01:06:17.369740 3951 log.go:172] (0xc0005c4fd0) (0xc000528dc0) Create stream\nI0603 
01:06:17.369764 3951 log.go:172] (0xc0005c4fd0) (0xc000528dc0) Stream added, broadcasting: 5\nI0603 01:06:17.370564 3951 log.go:172] (0xc0005c4fd0) Reply frame received for 5\nI0603 01:06:17.535092 3951 log.go:172] (0xc0005c4fd0) Data frame received for 3\nI0603 01:06:17.535119 3951 log.go:172] (0xc000756aa0) (3) Data frame handling\nI0603 01:06:17.535155 3951 log.go:172] (0xc0005c4fd0) Data frame received for 5\nI0603 01:06:17.535190 3951 log.go:172] (0xc000528dc0) (5) Data frame handling\nI0603 01:06:17.535208 3951 log.go:172] (0xc000528dc0) (5) Data frame sent\nI0603 01:06:17.535220 3951 log.go:172] (0xc0005c4fd0) Data frame received for 5\nI0603 01:06:17.535226 3951 log.go:172] (0xc000528dc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31276\nConnection to 172.17.0.12 31276 port [tcp/31276] succeeded!\nI0603 01:06:17.536482 3951 log.go:172] (0xc0005c4fd0) Data frame received for 1\nI0603 01:06:17.536506 3951 log.go:172] (0xc000a1a500) (1) Data frame handling\nI0603 01:06:17.536534 3951 log.go:172] (0xc000a1a500) (1) Data frame sent\nI0603 01:06:17.536556 3951 log.go:172] (0xc0005c4fd0) (0xc000a1a500) Stream removed, broadcasting: 1\nI0603 01:06:17.536762 3951 log.go:172] (0xc0005c4fd0) Go away received\nI0603 01:06:17.536952 3951 log.go:172] (0xc0005c4fd0) (0xc000a1a500) Stream removed, broadcasting: 1\nI0603 01:06:17.536982 3951 log.go:172] (0xc0005c4fd0) (0xc000756aa0) Stream removed, broadcasting: 3\nI0603 01:06:17.536998 3951 log.go:172] (0xc0005c4fd0) (0xc000528dc0) Stream removed, broadcasting: 5\n" Jun 3 01:06:17.542: INFO: stdout: "" Jun 3 01:06:17.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1947 execpod-affinityhp6bq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31276/ ; done' Jun 3 01:06:17.828: INFO: stderr: "I0603 01:06:17.686813 3971 log.go:172] (0xc0005422c0) (0xc0003a34a0) Create stream\nI0603 01:06:17.686862 3971 log.go:172] (0xc0005422c0) (0xc0003a34a0) Stream added, broadcasting: 1\nI0603 01:06:17.689665 3971 log.go:172] (0xc0005422c0) Reply frame received for 1\nI0603 01:06:17.689702 3971 log.go:172] (0xc0005422c0) (0xc0002e8d20) Create stream\nI0603 01:06:17.689713 3971 log.go:172] (0xc0005422c0) (0xc0002e8d20) Stream added, broadcasting: 3\nI0603 01:06:17.690784 3971 log.go:172] (0xc0005422c0) Reply frame received for 3\nI0603 01:06:17.690828 3971 log.go:172] (0xc0005422c0) (0xc0000dd860) Create stream\nI0603 01:06:17.690849 3971 log.go:172] (0xc0005422c0) (0xc0000dd860) Stream added, broadcasting: 5\nI0603 01:06:17.691967 3971 log.go:172] (0xc0005422c0) Reply frame received for 5\nI0603 01:06:17.752207 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.752239 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.752250 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.752265 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.752272 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.752280 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.757852 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.757958 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.758003 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.758295 3971 log.go:172] (0xc0005422c0) Data 
frame received for 3\nI0603 01:06:17.758326 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.758337 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.758349 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.758356 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.758368 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.763592 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.763616 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.763631 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.764086 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.764118 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.764135 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.764157 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.764173 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.764188 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.768239 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.768265 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.768290 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.768657 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.768673 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.768682 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.768694 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.768700 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.768707 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.772327 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.772343 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.772370 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.772702 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.772727 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.772760 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\nI0603 01:06:17.772780 3971 log.go:172] (0xc0005422c0) Data frame received for 5\n+ echo\nI0603 01:06:17.772800 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.772816 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.772848 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.772863 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.772878 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.776139 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.776156 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.776169 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.776581 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.776603 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.776615 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.776636 3971 log.go:172] 
(0xc0005422c0) Data frame received for 5\nI0603 01:06:17.776648 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.776659 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.780023 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.780046 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.780063 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.780444 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.780460 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.780468 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.780495 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.780522 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.780550 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.784048 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.784069 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.784100 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.784460 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.784477 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.784484 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.784490 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.784495 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.784500 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.788014 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.788034 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.788057 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.788474 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.788492 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.788499 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.788511 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.788521 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.788527 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\nI0603 01:06:17.788532 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.788536 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.788546 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\nI0603 01:06:17.792135 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.792154 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.792168 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.792612 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.792636 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.792651 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.792691 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.792724 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.792736 3971 log.go:172] 
(0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.796413 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.796436 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.796463 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.796940 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.796973 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.796990 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.797015 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.797030 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.797050 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\nI0603 01:06:17.797063 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.797075 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.797100 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\nI0603 01:06:17.800741 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.800760 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.800774 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.801006 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.801034 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.801045 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.801053 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.801068 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.801075 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.804519 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.804533 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.804550 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.804836 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.804848 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.804855 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.804867 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.804873 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.804879 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.808408 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.808428 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.808447 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.808753 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.808774 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.808805 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.808864 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.808881 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.808899 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.812250 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.812266 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.812279 3971 
log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.812497 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.812520 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.812536 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.812555 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.812564 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.812573 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.815935 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.815966 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.815994 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.817278 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.817378 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.817406 3971 log.go:172] (0xc0000dd860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31276/\nI0603 01:06:17.817429 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.817448 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.817472 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.820480 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.820582 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.820619 3971 log.go:172] (0xc0002e8d20) (3) Data frame sent\nI0603 01:06:17.821064 3971 log.go:172] (0xc0005422c0) Data frame received for 3\nI0603 01:06:17.821092 3971 log.go:172] (0xc0002e8d20) (3) Data frame handling\nI0603 01:06:17.821350 3971 log.go:172] (0xc0005422c0) Data frame received for 5\nI0603 01:06:17.821376 3971 log.go:172] (0xc0000dd860) (5) Data frame handling\nI0603 01:06:17.822886 3971 log.go:172] (0xc0005422c0) Data frame received for 1\nI0603 01:06:17.822922 3971 log.go:172] (0xc0003a34a0) (1) Data frame handling\nI0603 01:06:17.822948 3971 log.go:172] (0xc0003a34a0) (1) Data frame sent\nI0603 01:06:17.822970 3971 log.go:172] (0xc0005422c0) (0xc0003a34a0) Stream removed, broadcasting: 1\nI0603 01:06:17.822988 3971 log.go:172] (0xc0005422c0) Go away received\nI0603 01:06:17.823450 3971 log.go:172] (0xc0005422c0) (0xc0003a34a0) Stream removed, broadcasting: 1\nI0603 01:06:17.823470 3971 log.go:172] (0xc0005422c0) (0xc0002e8d20) Stream removed, broadcasting: 3\nI0603 01:06:17.823489 3971 log.go:172] (0xc0005422c0) (0xc0000dd860) Stream removed, broadcasting: 5\n" Jun 3 01:06:17.828: INFO: stdout: "\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8\naffinity-nodeport-ltvq8" Jun 3 01:06:17.829: INFO: Received response from host: Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received 
response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Received response from host: affinity-nodeport-ltvq8 Jun 3 01:06:17.829: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-1947, will wait for the garbage collector to delete the pods Jun 3 01:06:17.948: INFO: Deleting ReplicationController affinity-nodeport took: 5.596823ms Jun 3 01:06:18.348: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.258061ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:06:25.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1947" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.092 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":276,"skipped":4466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:06:25.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-ec791d7e-b4dd-4238-ae10-965fb2f86ab8 STEP: Creating a pod to test consume configMaps Jun 3 01:06:25.504: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf" in namespace "projected-1538" to be "Succeeded or Failed" Jun 3 01:06:25.507: INFO: Pod "pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.750144ms Jun 3 01:06:27.511: INFO: Pod "pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007155704s Jun 3 01:06:29.515: INFO: Pod "pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011742699s STEP: Saw pod success Jun 3 01:06:29.515: INFO: Pod "pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf" satisfied condition "Succeeded or Failed" Jun 3 01:06:29.518: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf container projected-configmap-volume-test: STEP: delete the pod Jun 3 01:06:29.571: INFO: Waiting for pod pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf to disappear Jun 3 01:06:29.580: INFO: Pod pod-projected-configmaps-2175d905-644d-42f8-a34d-5a81a4b5b1cf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:06:29.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1538" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4517,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:06:29.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-9f0b382f-fc5e-4e5d-967d-a2bc88d8f754 STEP: Creating a pod to test consume configMaps Jun 3 01:06:29.704: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134" in namespace "configmap-7892" to be "Succeeded or Failed" Jun 3 01:06:29.712: INFO: Pod "pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134": Phase="Pending", Reason="", readiness=false. Elapsed: 7.705186ms Jun 3 01:06:31.716: INFO: Pod "pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012017379s Jun 3 01:06:33.721: INFO: Pod "pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016361638s STEP: Saw pod success Jun 3 01:06:33.721: INFO: Pod "pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134" satisfied condition "Succeeded or Failed" Jun 3 01:06:33.723: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134 container configmap-volume-test: STEP: delete the pod Jun 3 01:06:33.758: INFO: Waiting for pod pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134 to disappear Jun 3 01:06:33.763: INFO: Pod pod-configmaps-1e44a10a-a605-4eaf-aad7-d9e6ac2d3134 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:06:33.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7892" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:06:33.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-7ec12124-c1a7-43d3-aaa7-563c079f07bf STEP: Creating a pod to test consume secrets Jun 3 01:06:33.880: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50" in namespace "projected-153" to be "Succeeded or Failed" Jun 3 01:06:33.896: INFO: Pod "pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50": Phase="Pending", Reason="", readiness=false. Elapsed: 15.308547ms Jun 3 01:06:36.063: INFO: Pod "pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182128875s Jun 3 01:06:38.129: INFO: Pod "pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.248790445s STEP: Saw pod success Jun 3 01:06:38.129: INFO: Pod "pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50" satisfied condition "Succeeded or Failed" Jun 3 01:06:38.132: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50 container secret-volume-test: STEP: delete the pod Jun 3 01:06:38.304: INFO: Waiting for pod pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50 to disappear Jun 3 01:06:38.316: INFO: Pod pod-projected-secrets-da26292f-da65-418c-81fa-07158460fa50 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:06:38.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-153" for this suite. 
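The projected-secret test above mounts a single secret into one pod through more than one volume. A pod spec of roughly that shape can be built with the core/v1 types; the sketch below uses hypothetical names, mount paths, and a busybox image rather than the suite's actual test manifest.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVolumeSecretPod returns a pod that mounts the same secret through
// two projected volumes at two paths, similar in shape to the pod the
// test above creates. All names and the image are illustrative.
func multiVolumeSecretPod(secretName string) *corev1.Pod {
	projected := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{projected("secret-volume-1"), projected("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}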
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:06:38.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3906.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3906.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 01:06:44.459: INFO: DNS probes using dns-test-a835d165-6b30-4333-9648-3458a1c78131 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3906.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3906.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 01:06:50.687: INFO: File wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:06:50.691: INFO: File jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:06:50.691: INFO: Lookups using dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 failed for: [wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local] Jun 3 01:06:55.696: INFO: File wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:06:55.700: INFO: File jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 3 01:06:55.700: INFO: Lookups using dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 failed for: [wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local] Jun 3 01:07:00.696: INFO: File wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:07:00.700: INFO: File jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:07:00.700: INFO: Lookups using dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 failed for: [wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local] Jun 3 01:07:05.697: INFO: File wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:07:05.701: INFO: File jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:07:05.701: INFO: Lookups using dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 failed for: [wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local] Jun 3 01:07:10.697: INFO: File wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:07:10.701: INFO: File jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local from pod dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 01:07:10.701: INFO: Lookups using dns-3906/dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 failed for: [wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local] Jun 3 01:07:15.700: INFO: DNS probes using dns-test-a411d4d3-d1cd-4a54-896b-0376f6cf0e11 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3906.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3906.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3906.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3906.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 01:07:24.609: INFO: DNS probes using dns-test-47f18e15-3c6c-427f-afd3-1b7c68e35f7e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:07:24.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3906" for this suite. 
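The DNS test above works because an ExternalName service is published by cluster DNS as a CNAME, so retargeting spec.externalName changes what the in-cluster name resolves to; the window of stale 'foo.example.com.' answers between 01:06:50 and 01:07:10 is consistent with DNS caching between the update and the probes. A minimal client-go sketch of the create-then-retarget sequence follows; clientset and names are assumed, not taken from the suite.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createThenRetargetExternalName creates an ExternalName service pointing
// at foo.example.com, then updates it to bar.example.com, mirroring the
// "changing the externalName" step in the test above.
func createThenRetargetExternalName(ctx context.Context, client kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	created, err := client.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Retarget the CNAME; resolvers may serve the old answer until
	// caches expire, as the log's transient lookup failures suggest.
	created.Spec.ExternalName = "bar.example.com"
	_, err = client.CoreV1().Services(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}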
• [SLOW TEST:46.416 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":280,"skipped":4634,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:07:24.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 01:07:24.832: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 3 01:07:29.847: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 01:07:29.847: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 3 01:07:34.014: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9644 /apis/apps/v1/namespaces/deployment-9644/deployments/test-cleanup-deployment 7040c471-b44a-4404-940d-670f1f7b9f11 9822810 1 2020-06-03 01:07:29 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-06-03 01:07:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-03 01:07:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044fea68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-03 01:07:30 +0000 UTC,LastTransitionTime:2020-06-03 01:07:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-6688745694" has successfully progressed.,LastUpdateTime:2020-06-03 01:07:33 +0000 UTC,LastTransitionTime:2020-06-03 01:07:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 3 01:07:34.051: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-9644 /apis/apps/v1/namespaces/deployment-9644/replicasets/test-cleanup-deployment-6688745694 4c0f9ba7-1915-44a0-8da2-15cdd8b2f7b6 9822799 1 2020-06-03 01:07:29 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7040c471-b44a-4404-940d-670f1f7b9f11 0xc000f1a507 0xc000f1a508}] [] [{kube-controller-manager Update apps/v1 2020-06-03 01:07:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7040c471-b44a-4404-940d-670f1f7b9f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000f1a6d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 3 01:07:34.057: INFO: Pod "test-cleanup-deployment-6688745694-bmwgh" is available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-bmwgh test-cleanup-deployment-6688745694- deployment-9644 /api/v1/namespaces/deployment-9644/pods/test-cleanup-deployment-6688745694-bmwgh 50973856-edf0-4678-b61e-42fe23660106 9822798 0 2020-06-03 01:07:30 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 4c0f9ba7-1915-44a0-8da2-15cdd8b2f7b6 0xc000f1b387 0xc000f1b388}] [] [{kube-controller-manager Update v1 2020-06-03 01:07:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c0f9ba7-1915-44a0-8da2-15cdd8b2f7b6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-03 01:07:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.210\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gn2cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gn2cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gn2cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 01:07:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-03 01:07:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 01:07:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 01:07:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.210,StartTime:2020-06-03 01:07:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 01:07:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://02dbd7a4715117d57522c4211cf0dd174cfb80dfb2483e92329fcbc00421cc06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:07:34.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9644" for this suite. • [SLOW TEST:9.325 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":281,"skipped":4651,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:07:34.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 01:07:34.593: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 01:07:36.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743254, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 01:07:39.704: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 01:07:39.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7296-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:07:42.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5948" for this suite. STEP: Destroying namespace "webhook-5948-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.737 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":282,"skipped":4652,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:07:42.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-680ab95d-ef51-4274-9593-527890e8043a STEP: Creating a pod to test consume secrets Jun 3 01:07:42.880: INFO: Waiting up to 5m0s for pod "pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e" in namespace "secrets-6039" to be "Succeeded or Failed" Jun 3 01:07:42.882: INFO: Pod "pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.082835ms Jun 3 01:07:44.886: INFO: Pod "pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006025113s Jun 3 01:07:46.890: INFO: Pod "pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010343608s STEP: Saw pod success Jun 3 01:07:46.890: INFO: Pod "pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e" satisfied condition "Succeeded or Failed" Jun 3 01:07:46.893: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e container secret-volume-test: STEP: delete the pod Jun 3 01:07:46.926: INFO: Waiting for pod pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e to disappear Jun 3 01:07:46.935: INFO: Pod pod-secrets-cd1762ac-1b35-47f7-a78d-85715618688e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:07:46.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6039" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4660,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:07:46.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 3 01:07:47.070: INFO: Waiting up to 5m0s for pod "pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5" in namespace "emptydir-4811" to be "Succeeded or Failed" Jun 3 01:07:47.079: INFO: Pod "pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.556726ms Jun 3 01:07:49.763: INFO: Pod "pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693359836s Jun 3 01:07:51.768: INFO: Pod "pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.697713929s Jun 3 01:07:53.772: INFO: Pod "pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.701996333s STEP: Saw pod success Jun 3 01:07:53.772: INFO: Pod "pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5" satisfied condition "Succeeded or Failed" Jun 3 01:07:53.774: INFO: Trying to get logs from node latest-worker pod pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5 container test-container: STEP: delete the pod Jun 3 01:07:53.804: INFO: Waiting for pod pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5 to disappear Jun 3 01:07:53.815: INFO: Pod pod-dcfada63-0c31-4806-b0d5-12e6b732a9a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:07:53.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4811" for this suite. • [SLOW TEST:6.879 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:07:53.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 01:07:53.900: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 01:07:53.932: INFO: Waiting for terminating namespaces to be deleted... 
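For context, the EmptyDir case that just passed above writes a file into an emptyDir volume as a non-root user and verifies 0666 permissions on the default medium. A minimal manifest exercising the same behavior might look like the sketch below; the pod name, UID, image, and command are hypothetical stand-ins chosen for illustration, not values from this run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-demo            # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                   # non-root, mirroring the (non-root,0666,default) case
    containers:
    - name: test-container
      image: busybox:1.36               # illustrative image choice
      command: ["sh", "-c", "umask 0; touch /test-volume/f; ls -l /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                      # default medium (node-local disk)

With umask 0, the touched file shows up as -rw-rw-rw- (0666) in the container logs, which is the property the conformance case asserts.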
Jun 3 01:07:53.934: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jun 3 01:07:53.941: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) Jun 3 01:07:53.941: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 3 01:07:53.941: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) Jun 3 01:07:53.941: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 3 01:07:53.941: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 3 01:07:53.941: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 01:07:53.941: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 3 01:07:53.941: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 01:07:53.941: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jun 3 01:07:53.948: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) Jun 3 01:07:53.948: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 3 01:07:53.948: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) Jun 3 01:07:53.948: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 3 01:07:53.948: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 3 01:07:53.948: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 01:07:53.948: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 3 01:07:53.948: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-375c1928-e61b-440b-b92a-c70f991c7947 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-375c1928-e61b-440b-b92a-c70f991c7947 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-375c1928-e61b-440b-b92a-c70f991c7947 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:08:02.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6628" for this suite.
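For context, the NodeSelector case above finds a schedulable node, applies a random label to it, and relaunches the pod with a matching nodeSelector. A minimal manifest exercising the same mechanism might look like the sketch below; the label key/value, pod name, and image are hypothetical stand-ins rather than values from this run:

  # kubectl label node <node-name> example.com/e2e-demo=42   (label applied out of band)
  apiVersion: v1
  kind: Pod
  metadata:
    name: nodeselector-demo             # hypothetical name
  spec:
    nodeSelector:
      example.com/e2e-demo: "42"        # pod schedules only onto a node carrying this label
    containers:
    - name: pause
      image: registry.k8s.io/pause:3.9  # illustrative minimal image

While no node carries the label, the pod stays Pending; once the label is applied, the scheduler places the pod on that node, which is the "respected if matching" behavior the test validates.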
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.371 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":285,"skipped":4735,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:08:02.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 01:08:03.145: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 01:08:05.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743283, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743283, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743283, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743283, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 01:08:08.218: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 3 01:08:08.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1377-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:08:09.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1195" for this suite. STEP: Destroying namespace "webhook-1195-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.252 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":286,"skipped":4754,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:08:09.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 01:08:10.350: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 01:08:12.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743290, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743290, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743290, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726743290, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 01:08:15.651: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via 
the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:08:15.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2189" for this suite. STEP: Destroying namespace "webhook-2189-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.509 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":287,"skipped":4770,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 3 01:08:15.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jun 3 01:08:16.064: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 3 01:08:33.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2774" for this suite. 
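For context, the AdmissionWebhook case that passed above registers a mutating admission webhook for pods and then creates a pod to confirm both the mutation and the API server's defaulting of the mutated fields. A MutatingWebhookConfiguration for such a setup is shaped roughly like the sketch below; all names, the namespace, and the path are hypothetical, and the CA bundle is elided:

  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: demo-mutating-webhook          # hypothetical name
  webhooks:
  - name: mutate-pods.example.com        # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        name: e2e-test-webhook           # a service fronting the webhook pod
        namespace: webhook-demo          # hypothetical namespace
        path: /mutating-pods             # hypothetical path
      caBundle: ""                       # base64-encoded CA bundle elided
    rules:
    - operations: ["CREATE"]
      apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]

The e2e framework registers an equivalent configuration dynamically via the AdmissionRegistration API, as the STEP lines above show, rather than applying a static manifest.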
• [SLOW TEST:17.526 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":288,"skipped":4780,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 3 01:08:33.482: INFO: Running AfterSuite actions on all nodes Jun 3 01:08:33.492: INFO: Running AfterSuite actions on node 1 Jun 3 01:08:33.492: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 5403.645 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS
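For reference, the final CustomResourcePublishOpenAPI case sets up a CRD serving multiple versions and verifies that the published OpenAPI spec tracks a version rename. A multi-version CRD of the kind that test creates looks roughly like the sketch below; the group, kind, and version names are hypothetical:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com    # hypothetical plural.group
  spec:
    group: stable.example.com
    scope: Namespaced
    names:
      plural: crontabs
      singular: crontab
      kind: CronTab
    versions:
    - name: v1                           # storage version
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v2                           # renaming this served version updates the published spec
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true

After a served version is renamed (say, v2 to v3), the definitions published by the API server stop advertising the old version name and start advertising the new one, while the untouched version's definitions stay the same; those are the three checks logged in the STEP lines of the test above.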