I0430 23:37:36.591132 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0430 23:37:36.591359 7 e2e.go:129] Starting e2e run "9ab8d1a1-3d60-4d70-a889-b678d634ffae" on Ginkgo node 1
{"msg":"Test Suite starting","total":290,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588289855 - Will randomize all specs
Will run 290 of 5093 specs

Apr 30 23:37:36.644: INFO: >>> kubeConfig: /root/.kube/config
Apr 30 23:37:36.648: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 30 23:37:36.668: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 30 23:37:36.731: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 30 23:37:36.731: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 30 23:37:36.731: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 30 23:37:36.744: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 30 23:37:36.744: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 30 23:37:36.744: INFO: e2e test version: v1.19.0-alpha.2.232+a26c34e47007df
Apr 30 23:37:36.746: INFO: kube-apiserver version: v1.18.2
Apr 30 23:37:36.746: INFO: >>> kubeConfig: /root/.kube/config
Apr 30 23:37:36.752: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:37:36.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Apr 30 23:37:36.846: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 30 23:37:36.858: INFO: Waiting up to 5m0s for pod "pod-686360d9-322c-4256-9b6d-1df3b8615946" in namespace "emptydir-6660" to be "Succeeded or Failed"
Apr 30 23:37:36.863: INFO: Pod "pod-686360d9-322c-4256-9b6d-1df3b8615946": Phase="Pending", Reason="", readiness=false. Elapsed: 5.162679ms
Apr 30 23:37:38.888: INFO: Pod "pod-686360d9-322c-4256-9b6d-1df3b8615946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030415967s
Apr 30 23:37:40.893: INFO: Pod "pod-686360d9-322c-4256-9b6d-1df3b8615946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034434532s
STEP: Saw pod success
Apr 30 23:37:40.893: INFO: Pod "pod-686360d9-322c-4256-9b6d-1df3b8615946" satisfied condition "Succeeded or Failed"
Apr 30 23:37:40.896: INFO: Trying to get logs from node latest-worker pod pod-686360d9-322c-4256-9b6d-1df3b8615946 container test-container:
STEP: delete the pod
Apr 30 23:37:40.952: INFO: Waiting for pod pod-686360d9-322c-4256-9b6d-1df3b8615946 to disappear
Apr 30 23:37:40.969: INFO: Pod pod-686360d9-322c-4256-9b6d-1df3b8615946 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:37:40.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6660" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:37:41.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:37:41.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-604" for this suite.
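Editor's note: the "fetching services" step above just lists Service objects across every namespace and checks that the test's service shows up. A minimal client-go sketch of that lookup (not the test's own code; the kubeconfig path matches the run above, everything else is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List services in all namespaces (metav1.NamespaceAll is the empty string).
	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}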
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":290,"completed":2,"skipped":17,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:37:41.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 23:37:42.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 30 23:37:44.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886662, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886662, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886662, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886662, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 23:37:47.246: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:37:47.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2427" for this suite.
STEP: Destroying namespace "webhook-2427-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.359 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":290,"completed":3,"skipped":75,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:37:47.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:37:52.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-785" for this suite.
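Editor's note: "adoption" here means the ReplicationController controller writes an ownerReference onto a pre-existing pod whose labels match the RC's selector. A sketch of the two object shapes involved (names and image are illustrative, not the test's exact manifests):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// The orphan pod, carrying the 'name' label the log mentions.
var orphanPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "pod-adoption",
		Labels: map[string]string{"name": "pod-adoption"},
	},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "pod-adoption",
			Image: "registry.example/pause", // illustrative image
		}},
	},
}

// An RC whose selector matches the orphan; once created, the controller
// adopts the pod instead of creating a replacement replica.
var rc = &corev1.ReplicationController{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
	Spec: corev1.ReplicationControllerSpec{
		Replicas: int32Ptr(1),
		Selector: map[string]string{"name": "pod-adoption"},
		Template: &corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "pod-adoption", Image: "registry.example/pause"}},
			},
		},
	},
}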
• [SLOW TEST:5.240 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":290,"completed":4,"skipped":88,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:37:52.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-806f8355-083a-4c0e-bfe7-83f4b4731d5b
STEP: Creating a pod to test consume configMaps
Apr 30 23:37:52.850: INFO: Waiting up to 5m0s for pod "pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134" in namespace "configmap-4306" to be "Succeeded or Failed"
Apr 30 23:37:52.857: INFO: Pod "pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651713ms
Apr 30 23:37:54.894: INFO: Pod "pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044086553s
Apr 30 23:37:56.900: INFO: Pod "pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049355701s
STEP: Saw pod success
Apr 30 23:37:56.900: INFO: Pod "pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134" satisfied condition "Succeeded or Failed"
Apr 30 23:37:56.903: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134 container configmap-volume-test:
STEP: delete the pod
Apr 30 23:37:56.956: INFO: Waiting for pod pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134 to disappear
Apr 30 23:37:56.972: INFO: Pod pod-configmaps-fdd1963b-7bda-4b88-b75b-d88c23b8a134 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:37:56.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4306" for this suite.
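Editor's note: "mappings and Item mode set" refers to a ConfigMap volume that remaps a key to a chosen path and sets a per-item file mode. A minimal sketch of that volume shape (the ConfigMap name and mode are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// ConfigMap volume mapping one key to a custom path with an explicit
// per-item mode; the item Mode overrides any volume-level defaultMode.
var configMapVolume = corev1.Volume{
	Name: "configmap-volume",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // illustrative name
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "path/to/data-2",
				Mode: int32Ptr(0400),
			}},
		},
	},
}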
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":5,"skipped":120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:37:56.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:38:01.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-809" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":290,"completed":6,"skipped":157,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:38:01.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-gwzh STEP: Creating a pod to test atomic-volume-subpath Apr 30 23:38:01.299: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gwzh" in namespace "subpath-3212" to be "Succeeded or Failed" Apr 30 23:38:01.326: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Pending", Reason="", readiness=false. Elapsed: 27.171031ms Apr 30 23:38:03.421: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121645784s Apr 30 23:38:05.425: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 4.126302314s Apr 30 23:38:07.430: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 6.130642283s Apr 30 23:38:09.434: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 8.135169594s Apr 30 23:38:11.439: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.1396467s Apr 30 23:38:13.443: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 12.144182149s Apr 30 23:38:15.447: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 14.148283708s Apr 30 23:38:17.450: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 16.151441016s Apr 30 23:38:19.454: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 18.155423952s Apr 30 23:38:21.458: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 20.15911142s Apr 30 23:38:23.462: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 22.163328246s Apr 30 23:38:25.468: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Running", Reason="", readiness=true. Elapsed: 24.168741002s Apr 30 23:38:27.472: INFO: Pod "pod-subpath-test-configmap-gwzh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.172673777s STEP: Saw pod success Apr 30 23:38:27.472: INFO: Pod "pod-subpath-test-configmap-gwzh" satisfied condition "Succeeded or Failed" Apr 30 23:38:27.474: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-gwzh container test-container-subpath-configmap-gwzh: STEP: delete the pod Apr 30 23:38:27.537: INFO: Waiting for pod pod-subpath-test-configmap-gwzh to disappear Apr 30 23:38:27.541: INFO: Pod pod-subpath-test-configmap-gwzh no longer exists STEP: Deleting pod pod-subpath-test-configmap-gwzh Apr 30 23:38:27.541: INFO: Deleting pod "pod-subpath-test-configmap-gwzh" in namespace "subpath-3212" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:38:27.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3212" for this suite. 
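Editor's note: the subpath test mounts a single entry of a configmap volume into the container through a subPath, then keeps the pod Running while the file is re-read across atomic-writer updates. A sketch of the mount shape (image and names are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// Container mounting one file out of a configmap volume via SubPath
// rather than mounting the whole volume directory.
var container = corev1.Container{
	Name:  "test-container-subpath",
	Image: "registry.example/agnhost", // illustrative
	VolumeMounts: []corev1.VolumeMount{{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "configmap-key", // only this entry is visible at the mount path
	}},
}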
• [SLOW TEST:26.413 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":290,"completed":7,"skipped":159,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:38:27.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5ddb6e77-4e88-44b4-9803-9e69ccc75a57
STEP: Creating a pod to test consume configMaps
Apr 30 23:38:27.682: INFO: Waiting up to 5m0s for pod "pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534" in namespace "configmap-748" to be "Succeeded or Failed"
Apr 30 23:38:27.698: INFO: Pod "pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534": Phase="Pending", Reason="", readiness=false. Elapsed: 15.621834ms
Apr 30 23:38:29.703: INFO: Pod "pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020451817s
Apr 30 23:38:31.707: INFO: Pod "pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02481532s
STEP: Saw pod success
Apr 30 23:38:31.707: INFO: Pod "pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534" satisfied condition "Succeeded or Failed"
Apr 30 23:38:31.711: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534 container configmap-volume-test:
STEP: delete the pod
Apr 30 23:38:31.764: INFO: Waiting for pod pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534 to disappear
Apr 30 23:38:31.788: INFO: Pod pod-configmaps-b13c60a2-e6bb-4c17-be2c-95fd3b9df534 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:38:31.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-748" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":8,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:38:31.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Apr 30 23:38:36.427: INFO: Successfully updated pod "labelsupdate58faa47d-b434-4bce-8ca3-e5bb4748099d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:38:38.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-336" for this suite. • [SLOW TEST:6.691 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":290,"completed":9,"skipped":298,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:38:38.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-2396/configmap-test-2c618a8f-6e4b-4ebc-947a-883dd6110d0c STEP: Creating a pod to test consume configMaps Apr 30 23:38:38.591: INFO: Waiting up to 5m0s for pod "pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274" in namespace "configmap-2396" to be "Succeeded or Failed" Apr 30 23:38:38.602: INFO: Pod "pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.83527ms Apr 30 23:38:40.614: INFO: Pod "pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023366652s Apr 30 23:38:42.618: INFO: Pod "pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027455669s STEP: Saw pod success Apr 30 23:38:42.618: INFO: Pod "pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274" satisfied condition "Succeeded or Failed" Apr 30 23:38:42.643: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274 container env-test: STEP: delete the pod Apr 30 23:38:42.686: INFO: Waiting for pod pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274 to disappear Apr 30 23:38:42.704: INFO: Pod pod-configmaps-9523a660-491c-4592-b2e0-f4b9cf684274 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:38:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2396" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":290,"completed":10,"skipped":309,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:38:42.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Apr 30 23:38:43.068: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:38:58.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7733" for this suite. 
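Editor's note: "mark a version not served" means flipping `served: false` on one version of a multi-version CRD, after which the apiserver drops that version's definitions from the published OpenAPI spec. A sketch of such a CRD (the group, kind, and schema helper are illustrative):

package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Two-version CRD with v2 no longer served; clients can still read
// stored objects through v1, but v2 disappears from the OpenAPI spec.
var crd = &apiextensionsv1.CustomResourceDefinition{
	ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // illustrative
	Spec: apiextensionsv1.CustomResourceDefinitionSpec{
		Group: "example.com",
		Scope: apiextensionsv1.NamespaceScoped,
		Names: apiextensionsv1.CustomResourceDefinitionNames{
			Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
		},
		Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
			{Name: "v1", Served: true, Storage: true, Schema: openAPISchema()},
			{Name: "v2", Served: false, Storage: false, Schema: openAPISchema()}, // marked not served
		},
	},
}

func openAPISchema() *apiextensionsv1.CustomResourceValidation {
	return &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
}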
• [SLOW TEST:15.762 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":290,"completed":11,"skipped":321,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:38:58.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-4837fcb3-b68b-4aba-9fb9-d7002e2c5166
STEP: Creating a pod to test consume secrets
Apr 30 23:38:58.777: INFO: Waiting up to 5m0s for pod "pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4" in namespace "secrets-7073" to be "Succeeded or Failed"
Apr 30 23:38:58.781: INFO: Pod "pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.895975ms
Apr 30 23:39:00.835: INFO: Pod "pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057903255s
Apr 30 23:39:02.839: INFO: Pod "pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061584359s
STEP: Saw pod success
Apr 30 23:39:02.839: INFO: Pod "pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4" satisfied condition "Succeeded or Failed"
Apr 30 23:39:02.841: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4 container secret-env-test:
STEP: delete the pod
Apr 30 23:39:02.896: INFO: Waiting for pod pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4 to disappear
Apr 30 23:39:02.914: INFO: Pod pod-secrets-096cdc3a-6cb5-4a8f-ac4b-42173e967ac4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:39:02.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7073" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":290,"completed":12,"skipped":329,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:39:02.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:39:03.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9742" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":290,"completed":13,"skipped":348,"failed":0}
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:39:03.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:39:03.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4485" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":290,"completed":14,"skipped":348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:03.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:19.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1464" for this suite. • [SLOW TEST:16.105 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":290,"completed":15,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:19.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Apr 30 23:39:19.479: INFO: Waiting up to 5m0s for pod "var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72" in namespace "var-expansion-7537" to be "Succeeded or Failed" Apr 30 23:39:19.489: INFO: Pod "var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72": Phase="Pending", Reason="", readiness=false. Elapsed: 9.594645ms Apr 30 23:39:21.546: INFO: Pod "var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066718633s Apr 30 23:39:23.551: INFO: Pod "var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071387318s STEP: Saw pod success Apr 30 23:39:23.551: INFO: Pod "var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72" satisfied condition "Succeeded or Failed" Apr 30 23:39:23.554: INFO: Trying to get logs from node latest-worker2 pod var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72 container dapi-container: STEP: delete the pod Apr 30 23:39:23.584: INFO: Waiting for pod var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72 to disappear Apr 30 23:39:23.596: INFO: Pod var-expansion-dc71302d-51cb-427c-9eef-9e647ec26d72 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:23.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7537" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":290,"completed":16,"skipped":427,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:23.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 30 23:39:23.950: INFO: Waiting up to 5m0s for pod "pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07" in namespace "emptydir-5396" to be "Succeeded or Failed" Apr 30 23:39:23.968: INFO: Pod "pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07": Phase="Pending", Reason="", readiness=false. Elapsed: 17.904495ms Apr 30 23:39:26.033: INFO: Pod "pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082790328s Apr 30 23:39:28.051: INFO: Pod "pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100819579s STEP: Saw pod success Apr 30 23:39:28.051: INFO: Pod "pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07" satisfied condition "Succeeded or Failed" Apr 30 23:39:28.054: INFO: Trying to get logs from node latest-worker pod pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07 container test-container: STEP: delete the pod Apr 30 23:39:28.184: INFO: Waiting for pod pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07 to disappear Apr 30 23:39:28.191: INFO: Pod pod-a7073a1d-61ad-47ad-99c0-1d17f7d4bd07 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:28.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5396" for this suite. 
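Editor's note: the emptyDir mode tests (the (root,0666,default) run at the top of this log and the (non-root,0777,default) run just above) all follow the same pattern: a short-lived container stats an emptyDir mount and exits, driving the pod to Succeeded. A sketch of the pod shape under test (pod name, image, and args are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Pod with an emptyDir volume on the default medium, mounted by a
// container that inspects the mount's permissions and exits.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever, // pod can reach Succeeded
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "registry.example/mounttest", // illustrative
			Args:  []string{"--fs_type=/test-volume", "--file_perm=/test-volume"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
	},
}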
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":17,"skipped":437,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:28.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 30 23:39:28.269: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75" in namespace "downward-api-7468" to be "Succeeded or Failed" Apr 30 23:39:28.309: INFO: Pod "downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75": Phase="Pending", Reason="", readiness=false. Elapsed: 39.987825ms Apr 30 23:39:30.313: INFO: Pod "downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044556531s Apr 30 23:39:32.317: INFO: Pod "downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048380594s STEP: Saw pod success Apr 30 23:39:32.317: INFO: Pod "downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75" satisfied condition "Succeeded or Failed" Apr 30 23:39:32.319: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75 container client-container: STEP: delete the pod Apr 30 23:39:32.356: INFO: Waiting for pod downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75 to disappear Apr 30 23:39:32.369: INFO: Pod downwardapi-volume-fafc7511-758e-4161-950c-ff6817b02f75 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:32.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7468" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":290,"completed":18,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:32.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 30 23:39:32.455: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 30 23:39:32.493: INFO: Waiting for terminating namespaces to be deleted... Apr 30 23:39:32.496: INFO: Logging pods the apiserver thinks is on node latest-worker before test Apr 30 23:39:32.501: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Apr 30 23:39:32.501: INFO: Container kindnet-cni ready: true, restart count 0 Apr 30 23:39:32.501: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Apr 30 23:39:32.501: INFO: Container kube-proxy ready: true, restart count 0 Apr 30 23:39:32.501: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Apr 30 23:39:32.505: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Apr 30 23:39:32.505: INFO: Container kindnet-cni ready: true, restart count 0 Apr 30 23:39:32.505: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Apr 30 23:39:32.505: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-29902a4a-ebc5-47d8-a87e-40f9d63b0c9f 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-29902a4a-ebc5-47d8-a87e-40f9d63b0c9f off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-29902a4a-ebc5-47d8-a87e-40f9d63b0c9f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:40.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5533" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.389 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":290,"completed":19,"skipped":481,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:39:40.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 23:39:41.667: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 30 23:39:43.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886781, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886781, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886781, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886781, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 23:39:46.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:39:47.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2271" for this suite.
STEP: Destroying namespace "webhook-2271-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.597 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":290,"completed":20,"skipped":490,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 30 23:39:47.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
Apr 30 23:39:47.434: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix733255743/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 30 23:39:47.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-503" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":290,"completed":21,"skipped":509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:47.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 30 23:39:47.831: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a649d177-a701-4641-be2c-07b091ade9ed" in namespace "security-context-test-5515" to be "Succeeded or Failed" Apr 30 23:39:48.127: INFO: Pod "busybox-user-65534-a649d177-a701-4641-be2c-07b091ade9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 296.625797ms Apr 30 23:39:50.132: INFO: Pod "busybox-user-65534-a649d177-a701-4641-be2c-07b091ade9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30095751s Apr 30 23:39:52.135: INFO: Pod "busybox-user-65534-a649d177-a701-4641-be2c-07b091ade9ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.304421666s Apr 30 23:39:52.135: INFO: Pod "busybox-user-65534-a649d177-a701-4641-be2c-07b091ade9ed" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:52.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5515" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":22,"skipped":538,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:52.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0430 23:39:53.074561 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 30 23:39:53.074: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:53.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8077" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":290,"completed":23,"skipped":548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:53.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:39:59.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8400" for this suite. STEP: Destroying namespace "nsdeletetest-1613" for this suite. Apr 30 23:39:59.590: INFO: Namespace nsdeletetest-1613 was already deleted STEP: Destroying namespace "nsdeletetest-9356" for this suite. 
• [SLOW TEST:6.512 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":290,"completed":24,"skipped":587,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:39:59.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-1703/secret-test-e5007a48-ef54-4754-84ca-e95a8f35c76f STEP: Creating a pod to test consume secrets Apr 30 23:39:59.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56" in namespace "secrets-1703" to be "Succeeded or Failed" Apr 30 23:39:59.682: INFO: Pod "pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56": Phase="Pending", Reason="", readiness=false. Elapsed: 18.703247ms Apr 30 23:40:01.687: INFO: Pod "pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023596076s Apr 30 23:40:03.692: INFO: Pod "pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028665349s STEP: Saw pod success Apr 30 23:40:03.692: INFO: Pod "pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56" satisfied condition "Succeeded or Failed" Apr 30 23:40:03.696: INFO: Trying to get logs from node latest-worker pod pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56 container env-test: STEP: delete the pod Apr 30 23:40:03.731: INFO: Waiting for pod pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56 to disappear Apr 30 23:40:03.738: INFO: Pod pod-configmaps-172535a8-0a3c-456b-b44c-816cff3f1a56 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:40:03.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1703" for this suite. 
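------------------------------
The Secrets spec that follows consumes the secret through the container environment rather than a volume, then reads it back from the env-test container's output. One common way a pod wires a Secret key into an environment variable is via a SecretKeyRef; a sketch under that assumption (pod, secret, and key names are illustrative, not the generated ones from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // dumps the env for verification
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------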
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":290,"completed":25,"skipped":593,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:40:03.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 30 23:40:03.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50" in namespace "downward-api-3096" to be "Succeeded or Failed" Apr 30 23:40:03.868: INFO: Pod "downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50": Phase="Pending", Reason="", readiness=false. Elapsed: 35.046729ms Apr 30 23:40:05.872: INFO: Pod "downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03911307s Apr 30 23:40:07.877: INFO: Pod "downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044081574s STEP: Saw pod success Apr 30 23:40:07.877: INFO: Pod "downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50" satisfied condition "Succeeded or Failed" Apr 30 23:40:07.880: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50 container client-container: STEP: delete the pod Apr 30 23:40:07.918: INFO: Waiting for pod downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50 to disappear Apr 30 23:40:07.930: INFO: Pod downwardapi-volume-21d9b68b-985e-4efe-8166-ab79b09e1a50 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:40:07.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3096" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":290,"completed":26,"skipped":594,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:40:07.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 30 23:40:08.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 30 23:40:10.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 23:40:13.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723886808, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 30 23:40:16.131: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:40:28.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5473" for this suite. STEP: Destroying namespace "webhook-5473-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.576 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":290,"completed":27,"skipped":597,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:40:28.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:40:39.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7281" for this suite. • [SLOW TEST:11.168 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":290,"completed":28,"skipped":611,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:40:39.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Apr 30 23:40:39.827: INFO: namespace kubectl-4611 Apr 30 23:40:39.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4611' Apr 30 23:40:42.674: INFO: stderr: "" Apr 30 23:40:42.674: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 30 23:40:43.693: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 23:40:43.693: INFO: Found 0 / 1 Apr 30 23:40:44.679: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 23:40:44.679: INFO: Found 0 / 1 Apr 30 23:40:45.705: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 23:40:45.705: INFO: Found 1 / 1 Apr 30 23:40:45.705: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 30 23:40:45.708: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 23:40:45.708: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 30 23:40:45.708: INFO: wait on agnhost-master startup in kubectl-4611 Apr 30 23:40:45.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-b5mnf agnhost-master --namespace=kubectl-4611' Apr 30 23:40:45.832: INFO: stderr: "" Apr 30 23:40:45.832: INFO: stdout: "Paused\n" STEP: exposing RC Apr 30 23:40:45.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4611' Apr 30 23:40:46.145: INFO: stderr: "" Apr 30 23:40:46.145: INFO: stdout: "service/rm2 exposed\n" Apr 30 23:40:46.200: INFO: Service rm2 in namespace kubectl-4611 found. STEP: exposing service Apr 30 23:40:48.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4611' Apr 30 23:40:48.400: INFO: stderr: "" Apr 30 23:40:48.400: INFO: stdout: "service/rm3 exposed\n" Apr 30 23:40:48.424: INFO: Service rm3 in namespace kubectl-4611 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:40:50.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4611" for this suite. • [SLOW TEST:10.756 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":290,"completed":29,"skipped":615,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:40:50.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Apr 30 23:40:55.118: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1118 pod-service-account-3b8ca64c-e7be-4bc8-bd7f-c4154b2cc3c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 30 23:40:55.350: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1118 pod-service-account-3b8ca64c-e7be-4bc8-bd7f-c4154b2cc3c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 30 23:40:55.545: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1118 pod-service-account-3b8ca64c-e7be-4bc8-bd7f-c4154b2cc3c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:40:55.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1118" for this suite. 
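------------------------------
The svcaccounts spec above only checks that the three standard credential files exist at the token mount path, reading each one via kubectl exec. From inside any pod that mounts the service-account token, the same files can be read directly; a small stdlib sketch (not the test's own code), using the mount point shown in the exec commands from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default in-pod mount point for service-account credentials: the same
	// three files the spec cats via kubectl exec.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}
------------------------------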
• [SLOW TEST:5.387 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":290,"completed":30,"skipped":629,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:40:55.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 30 23:40:56.250: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 30 23:40:56.265: INFO: Waiting for terminating namespaces to be deleted... Apr 30 23:40:56.363: INFO: Logging pods the apiserver thinks is on node latest-worker before test Apr 30 23:40:56.370: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Apr 30 23:40:56.370: INFO: Container kindnet-cni ready: true, restart count 0 Apr 30 23:40:56.370: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Apr 30 23:40:56.370: INFO: Container kube-proxy ready: true, restart count 0 Apr 30 23:40:56.370: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Apr 30 23:40:56.376: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Apr 30 23:40:56.376: INFO: Container kindnet-cni ready: true, restart count 0 Apr 30 23:40:56.376: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Apr 30 23:40:56.376: INFO: Container kube-proxy ready: true, restart count 0 Apr 30 23:40:56.376: INFO: agnhost-master-b5mnf from kubectl-4611 started at 2020-04-30 23:40:42 +0000 UTC (1 container statuses recorded) Apr 30 23:40:56.376: INFO: Container agnhost-master ready: true, restart count 0 Apr 30 23:40:56.376: INFO: pod-service-account-3b8ca64c-e7be-4bc8-bd7f-c4154b2cc3c1 from svcaccounts-1118 started at 2020-04-30 23:40:51 +0000 UTC (1 container statuses recorded) Apr 30 23:40:56.376: INFO: Container test ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-3ee431a5-90b0-49b0-98b9-cbc4e2d6c1e7 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-3ee431a5-90b0-49b0-98b9-cbc4e2d6c1e7 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3ee431a5-90b0-49b0-98b9-cbc4e2d6c1e7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:46:04.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-632" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.900 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":290,"completed":31,"skipped":633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:46:04.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-4632 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4632 to expose endpoints map[] Apr 30 23:46:04.861: INFO: Get endpoints failed (13.403991ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 30 23:46:05.865: INFO: successfully validated that service multi-endpoint-test in namespace services-4632 exposes endpoints map[] (1.017845389s elapsed) STEP: Creating pod pod1 in namespace services-4632 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4632 to expose endpoints map[pod1:[100]] Apr 30 23:46:08.968: INFO: successfully validated that service multi-endpoint-test in namespace services-4632 exposes endpoints map[pod1:[100]] (3.094172117s elapsed) STEP: Creating pod pod2 in namespace services-4632 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4632 to expose endpoints map[pod1:[100] 
pod2:[101]] Apr 30 23:46:13.145: INFO: successfully validated that service multi-endpoint-test in namespace services-4632 exposes endpoints map[pod1:[100] pod2:[101]] (4.172998354s elapsed) STEP: Deleting pod pod1 in namespace services-4632 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4632 to expose endpoints map[pod2:[101]] Apr 30 23:46:13.210: INFO: successfully validated that service multi-endpoint-test in namespace services-4632 exposes endpoints map[pod2:[101]] (59.244142ms elapsed) STEP: Deleting pod pod2 in namespace services-4632 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4632 to expose endpoints map[] Apr 30 23:46:13.266: INFO: successfully validated that service multi-endpoint-test in namespace services-4632 exposes endpoints map[] (40.624873ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:46:13.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4632" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:8.915 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":290,"completed":32,"skipped":663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:46:13.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0430 23:46:27.217501 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 30 23:46:27.217: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:46:27.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9570" for this suite. • [SLOW TEST:13.758 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":290,"completed":33,"skipped":694,"failed":0} [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:46:27.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2270 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2270;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2270 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2270;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2270.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2270.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2270.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-2270.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2270.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2270.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 42.130.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.130.42_udp@PTR;check="$$(dig +tcp +noall +answer +search 42.130.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.130.42_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2270 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2270;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2270 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2270;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2270.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2270.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2270.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2270.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2270.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2270.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2270.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2270.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 42.130.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.130.42_udp@PTR;check="$$(dig +tcp +noall +answer +search 42.130.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.130.42_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 30 23:46:34.184: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.343: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.385: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.502: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.682: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.692: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.810: INFO: Unable to read jessie_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.837: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:34.990: INFO: Unable to read jessie_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server 
could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:35.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:35.043: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:35.047: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:35.073: INFO: Lookups using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2270 wheezy_tcp@dns-test-service.dns-2270 wheezy_udp@dns-test-service.dns-2270.svc wheezy_tcp@dns-test-service.dns-2270.svc wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2270 jessie_tcp@dns-test-service.dns-2270 jessie_udp@dns-test-service.dns-2270.svc jessie_tcp@dns-test-service.dns-2270.svc jessie_udp@_http._tcp.dns-test-service.dns-2270.svc jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc] Apr 30 23:46:40.078: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.083: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.086: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.088: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.091: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.094: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.097: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.100: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could 
not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.121: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.123: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.126: INFO: Unable to read jessie_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.128: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.131: INFO: Unable to read jessie_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.134: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.137: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.141: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:40.159: INFO: Lookups using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2270 wheezy_tcp@dns-test-service.dns-2270 wheezy_udp@dns-test-service.dns-2270.svc wheezy_tcp@dns-test-service.dns-2270.svc wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2270 jessie_tcp@dns-test-service.dns-2270 jessie_udp@dns-test-service.dns-2270.svc jessie_tcp@dns-test-service.dns-2270.svc jessie_udp@_http._tcp.dns-test-service.dns-2270.svc jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc] Apr 30 23:46:45.078: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.082: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.085: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods 
dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.089: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.092: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.096: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.103: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.144: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.147: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.148: INFO: Unable to read jessie_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.153: INFO: Unable to read jessie_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.155: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.158: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.160: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:45.174: INFO: Lookups using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-2270 wheezy_tcp@dns-test-service.dns-2270 wheezy_udp@dns-test-service.dns-2270.svc wheezy_tcp@dns-test-service.dns-2270.svc wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2270 jessie_tcp@dns-test-service.dns-2270 jessie_udp@dns-test-service.dns-2270.svc jessie_tcp@dns-test-service.dns-2270.svc jessie_udp@_http._tcp.dns-test-service.dns-2270.svc jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc] Apr 30 23:46:50.079: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.083: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.086: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.090: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.093: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.096: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.100: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.103: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.128: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.131: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.133: INFO: Unable to read jessie_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.136: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods 
dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.138: INFO: Unable to read jessie_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.141: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.144: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.147: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:50.164: INFO: Lookups using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2270 wheezy_tcp@dns-test-service.dns-2270 wheezy_udp@dns-test-service.dns-2270.svc wheezy_tcp@dns-test-service.dns-2270.svc wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2270 jessie_tcp@dns-test-service.dns-2270 jessie_udp@dns-test-service.dns-2270.svc jessie_tcp@dns-test-service.dns-2270.svc jessie_udp@_http._tcp.dns-test-service.dns-2270.svc jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc] Apr 30 23:46:55.080: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.084: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.107: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.113: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.116: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods 
dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.119: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.143: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.147: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.150: INFO: Unable to read jessie_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.153: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.182: INFO: Unable to read jessie_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.185: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.188: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.191: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:46:55.210: INFO: Lookups using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2270 wheezy_tcp@dns-test-service.dns-2270 wheezy_udp@dns-test-service.dns-2270.svc wheezy_tcp@dns-test-service.dns-2270.svc wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2270 jessie_tcp@dns-test-service.dns-2270 jessie_udp@dns-test-service.dns-2270.svc jessie_tcp@dns-test-service.dns-2270.svc jessie_udp@_http._tcp.dns-test-service.dns-2270.svc jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc] Apr 30 23:47:00.078: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.082: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods 
dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.085: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.091: INFO: Unable to read wheezy_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.093: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.095: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.098: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.120: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.123: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.127: INFO: Unable to read jessie_udp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.130: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270 from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.132: INFO: Unable to read jessie_udp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.138: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.141: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc from pod dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18: the server could not find the requested resource (get 
pods dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18) Apr 30 23:47:00.160: INFO: Lookups using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2270 wheezy_tcp@dns-test-service.dns-2270 wheezy_udp@dns-test-service.dns-2270.svc wheezy_tcp@dns-test-service.dns-2270.svc wheezy_udp@_http._tcp.dns-test-service.dns-2270.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2270.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2270 jessie_tcp@dns-test-service.dns-2270 jessie_udp@dns-test-service.dns-2270.svc jessie_tcp@dns-test-service.dns-2270.svc jessie_udp@_http._tcp.dns-test-service.dns-2270.svc jessie_tcp@_http._tcp.dns-test-service.dns-2270.svc] Apr 30 23:47:06.747: INFO: DNS probes using dns-2270/dns-test-dc13b7c4-4386-414c-92a0-82b08e409f18 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:47:07.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2270" for this suite. • [SLOW TEST:40.387 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":290,"completed":34,"skipped":694,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:47:07.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 30 23:47:08.018: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:47:24.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9042" for this suite. 
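------------------------------
The "Unable to read ..." messages in the DNS test above are the framework polling the probe pod before its lookup results exist; once the probe finishes, the 23:47:06 line reports success. The names under test are partial qualified names (dns-test-service, dns-test-service.dns-2270, dns-test-service.dns-2270.svc), which only resolve through the search list the kubelet writes into the pod's /etc/resolv.conf. A minimal sketch of such an in-pod probe, using only the Go standard library (the service name and namespace are taken from the log; run outside a pod in that namespace, every lookup would fail):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Partial qualified names like those probed above. Inside a pod these
	// resolve via the resolv.conf search domains (dns-2270.svc.cluster.local,
	// svc.cluster.local, cluster.local); the lookup itself is ordinary DNS.
	for _, host := range []string{
		"dns-test-service",
		"dns-test-service.dns-2270",
		"dns-test-service.dns-2270.svc",
	} {
		addrs, err := net.LookupHost(host)
		if err != nil {
			fmt.Printf("unable to read %s: %v\n", host, err)
			continue
		}
		fmt.Printf("%s -> %v\n", host, addrs)
	}
	// SRV lookup for the named port, mirroring the
	// _http._tcp.dns-test-service records checked by the test.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-2270.svc")
	if err != nil {
		fmt.Printf("SRV lookup failed: %v\n", err)
		return
	}
	for _, srv := range srvs {
		fmt.Printf("SRV %s:%d\n", srv.Target, srv.Port)
	}
}
------------------------------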
• [SLOW TEST:17.062 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":290,"completed":35,"skipped":698,"failed":0} [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:47:24.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 30 23:47:29.020: INFO: &Pod{ObjectMeta:{send-events-6b85520b-563a-41b5-bd0a-2cf1357f8555 events-3659 /api/v1/namespaces/events-3659/pods/send-events-6b85520b-563a-41b5-bd0a-2cf1357f8555 3b4c2762-55b2-43d8-92eb-a42747cb2a8d 443823 0 2020-04-30 23:47:24 +0000 UTC map[name:foo time:987743385] map[] [] [] [{e2e.test Update v1 2020-04-30 23:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-30 23:47:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7nh6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7nh6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7nh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-30 23:47:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-30 23:47:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-30 23:47:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-30 23:47:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.251,StartTime:2020-04-30 23:47:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-30 23:47:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://92e7c1015b36483638a4850b69700948ea9486f269d50170b512ccedcaee8e40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 30 23:47:31.025: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 30 23:47:33.030: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:47:33.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3659" for this suite. • [SLOW TEST:8.226 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":290,"completed":36,"skipped":698,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:47:33.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 30 23:47:33.197: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 30 23:47:33.210: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:33.212: INFO: Number of nodes with available pods: 0 Apr 30 23:47:33.212: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:34.217: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:34.221: INFO: Number of nodes with available pods: 0 Apr 30 23:47:34.221: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:35.218: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:35.222: INFO: Number of nodes with available pods: 0 Apr 30 23:47:35.222: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:36.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:36.277: INFO: Number of nodes with available pods: 0 Apr 30 23:47:36.277: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:37.218: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:37.222: INFO: Number of nodes with available pods: 0 Apr 30 23:47:37.222: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:38.216: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:38.219: INFO: Number of nodes with available pods: 2 Apr 30 23:47:38.219: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 30 23:47:38.309: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:38.309: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:38.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:39.336: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:39.336: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:39.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:40.361: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:40.361: INFO: Wrong image for pod: daemon-set-rlkxf. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:40.365: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:41.338: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:41.338: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:41.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:42.336: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:42.337: INFO: Pod daemon-set-2fr4q is not available Apr 30 23:47:42.337: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:42.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:43.337: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:43.337: INFO: Pod daemon-set-2fr4q is not available Apr 30 23:47:43.337: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:43.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:44.343: INFO: Wrong image for pod: daemon-set-2fr4q. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:44.343: INFO: Pod daemon-set-2fr4q is not available Apr 30 23:47:44.343: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:44.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:45.348: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:45.348: INFO: Pod daemon-set-wkzxp is not available Apr 30 23:47:45.352: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:46.337: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 30 23:47:46.337: INFO: Pod daemon-set-wkzxp is not available Apr 30 23:47:46.343: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:47.337: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:47.337: INFO: Pod daemon-set-wkzxp is not available Apr 30 23:47:47.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:48.338: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:48.338: INFO: Pod daemon-set-wkzxp is not available Apr 30 23:47:48.343: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:49.336: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:49.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:50.337: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:50.337: INFO: Pod daemon-set-rlkxf is not available Apr 30 23:47:50.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:51.362: INFO: Wrong image for pod: daemon-set-rlkxf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Apr 30 23:47:51.362: INFO: Pod daemon-set-rlkxf is not available Apr 30 23:47:51.378: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:52.338: INFO: Pod daemon-set-zj2kw is not available Apr 30 23:47:52.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 30 23:47:52.345: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:52.348: INFO: Number of nodes with available pods: 1 Apr 30 23:47:52.348: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:53.354: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:53.358: INFO: Number of nodes with available pods: 1 Apr 30 23:47:53.358: INFO: Node latest-worker is running more than one daemon pod Apr 30 23:47:54.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 30 23:47:54.357: INFO: Number of nodes with available pods: 2 Apr 30 23:47:54.357: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2131, will wait for the garbage collector to delete the pods Apr 30 23:47:54.452: INFO: Deleting DaemonSet.extensions daemon-set took: 6.386679ms Apr 30 23:47:54.852: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.290513ms Apr 30 23:48:05.366: INFO: Number of nodes with available pods: 0 Apr 30 23:48:05.366: INFO: Number of running nodes: 0, number of available pods: 0 Apr 30 23:48:05.368: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2131/daemonsets","resourceVersion":"444029"},"items":null} Apr 30 23:48:05.372: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2131/pods","resourceVersion":"444030"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:48:05.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2131" for this suite. 
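------------------------------
The DaemonSet test just above flips the pod template's image from httpd:2.4.38-alpine to agnhost:2.13 and then waits while the RollingUpdate strategy replaces one pod at a time (the 2fr4q -> wkzxp and rlkxf -> zj2kw sequence in the log). The trigger is nothing more than an update of the pod template, sketched here with client-go v0.18-era signatures and the namespace and name from the run:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	daemonSets := client.AppsV1().DaemonSets("daemonsets-2131")
	ds, err := daemonSets.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Changing the template is what starts the rolling update; with the
	// default maxUnavailable of 1, the controller deletes and recreates
	// pods node by node, which is the pacing visible in the log above.
	ds.Spec.Template.Spec.Containers[0].Image = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13"
	if _, err := daemonSets.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

Two tests earlier, "Pods should be submitted and removed" sets up a watch before creating its pod so the ADDED and DELETED notifications can be asserted, and the Events test then checks that the scheduler and kubelet both emitted events for a running pod. A rough client-go equivalent of those two observations (the kubeconfig path matches the run above; the namespace and pod name here are placeholders, not from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()
	ns, podName := "default", "send-events-demo" // placeholders

	// Watch lifecycle notifications for one pod, as the "submitted and
	// removed" test does before creating and then gracefully deleting it.
	w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + podName,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	go func() {
		for ev := range w.ResultChan() {
			fmt.Printf("watch: %s\n", ev.Type) // ADDED, MODIFIED, DELETED
		}
	}()

	// List events referencing the pod, which is how the Events test finds
	// the scheduler ("Scheduled") and kubelet ("Pulled"/"Started") entries.
	events, err := client.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + podName,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("event: %s from %s: %s\n", e.Reason, e.Source.Component, e.Message)
	}
}
------------------------------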
• [SLOW TEST:32.308 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":290,"completed":37,"skipped":711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:48:05.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 30 23:48:05.440: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 30 23:48:07.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8666 create -f -' Apr 30 23:48:10.614: INFO: stderr: "" Apr 30 23:48:10.614: INFO: stdout: "e2e-test-crd-publish-openapi-4440-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 30 23:48:10.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8666 delete e2e-test-crd-publish-openapi-4440-crds test-cr' Apr 30 23:48:10.729: INFO: stderr: "" Apr 30 23:48:10.730: INFO: stdout: "e2e-test-crd-publish-openapi-4440-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 30 23:48:10.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8666 apply -f -' Apr 30 23:48:11.137: INFO: stderr: "" Apr 30 23:48:11.138: INFO: stdout: "e2e-test-crd-publish-openapi-4440-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 30 23:48:11.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8666 delete e2e-test-crd-publish-openapi-4440-crds test-cr' Apr 30 23:48:11.257: INFO: stderr: "" Apr 30 23:48:11.258: INFO: stdout: "e2e-test-crd-publish-openapi-4440-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 30 23:48:11.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4440-crds' Apr 30 23:48:11.502: INFO: stderr: "" Apr 30 23:48:11.502: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4440-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:48:14.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8666" for this suite. • [SLOW TEST:9.096 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":290,"completed":38,"skipped":749,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:48:14.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 in namespace container-probe-6049 Apr 30 23:48:18.583: INFO: Started pod liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 in namespace container-probe-6049 STEP: checking the pod's current state and verifying that restartCount is present Apr 30 23:48:18.586: INFO: Initial restart count of pod liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 is 0 Apr 30 23:48:32.619: INFO: Restart count of pod container-probe-6049/liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 is now 1 (14.033251313s elapsed) Apr 30 23:48:52.701: INFO: Restart count of pod 
container-probe-6049/liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 is now 2 (34.114494879s elapsed) Apr 30 23:49:12.743: INFO: Restart count of pod container-probe-6049/liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 is now 3 (54.157408785s elapsed) Apr 30 23:49:32.788: INFO: Restart count of pod container-probe-6049/liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 is now 4 (1m14.202175912s elapsed) Apr 30 23:50:36.991: INFO: Restart count of pod container-probe-6049/liveness-0dbaf2ce-d4dc-4fcd-a97d-d39b94443153 is now 5 (2m18.404438118s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:50:37.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6049" for this suite. • [SLOW TEST:142.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":290,"completed":39,"skipped":763,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:50:37.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 30 23:50:37.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7317' Apr 30 23:50:37.260: INFO: stderr: "" Apr 30 23:50:37.260: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 30 23:50:42.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7317 -o json' Apr 30 23:50:42.422: INFO: stderr: "" Apr 30 23:50:42.422: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-30T23:50:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n 
\"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-04-30T23:50:37Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-04-30T23:50:39Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7317\",\n \"resourceVersion\": \"444585\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7317/pods/e2e-test-httpd-pod\",\n \"uid\": \"df633d3a-46d2-457c-b9a8-e3bb37bcafd9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bdnzb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bdnzb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bdnzb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-30T23:50:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-30T23:50:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-30T23:50:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2020-04-30T23:50:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3f6e2bf5b101ce9e47b55f1fd9e0427c33a8c66baa82cb3bc993dcf2ab338590\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-30T23:50:39Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.2\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-30T23:50:37Z\"\n }\n}\n" STEP: replace the image in the pod Apr 30 23:50:42.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7317' Apr 30 23:50:42.756: INFO: stderr: "" Apr 30 23:50:42.756: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 Apr 30 23:50:42.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7317' Apr 30 23:50:45.964: INFO: stderr: "" Apr 30 23:50:45.964: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:50:45.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7317" for this suite. 
• [SLOW TEST:8.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":290,"completed":40,"skipped":777,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:50:46.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 30 23:50:46.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910" in namespace "downward-api-7936" to be "Succeeded or Failed" Apr 30 23:50:46.164: INFO: Pod "downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910": Phase="Pending", Reason="", readiness=false. Elapsed: 18.746259ms Apr 30 23:50:48.198: INFO: Pod "downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053097346s Apr 30 23:50:50.202: INFO: Pod "downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057326072s STEP: Saw pod success Apr 30 23:50:50.202: INFO: Pod "downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910" satisfied condition "Succeeded or Failed" Apr 30 23:50:50.205: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910 container client-container: STEP: delete the pod Apr 30 23:50:50.250: INFO: Waiting for pod downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910 to disappear Apr 30 23:50:50.256: INFO: Pod downwardapi-volume-93bdaaf6-cd2b-4273-b7f5-7b03deda4910 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:50:50.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7936" for this suite. 
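------------------------------
The Downward API volume test above projects the container's own CPU limit into a file; the pod prints the file once and exits, which is why the framework waits for "Succeeded or Failed" rather than for readiness. A sketch of such a pod (the pod name and mount path are illustrative; the projected resource, limits.cpu, is what the test verifies, and a 1m divisor reports it in millicores):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Projects the container's CPU limit into the file;
							// if no limit is set, node allocatable is used.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------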
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":290,"completed":41,"skipped":784,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:50:50.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 30 23:50:50.871: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 30 23:50:52.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723887450, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723887450, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723887450, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723887450, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 30 23:50:55.955: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 30 23:50:55.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:50:57.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3834" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.480 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":290,"completed":42,"skipped":792,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:50:57.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6195 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-6195 Apr 30 23:50:57.932: INFO: Found 0 stateful pods, waiting for 1 Apr 30 23:51:07.938: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Apr 30 23:51:07.997: INFO: Deleting all statefulset in ns statefulset-6195 Apr 30 23:51:08.001: INFO: Scaling statefulset ss to 0 Apr 30 23:51:28.087: INFO: Waiting for statefulset status.replicas updated to 0 Apr 30 23:51:28.090: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:51:28.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6195" for this suite. 
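------------------------------
The StatefulSet test immediately above exercises the scale subresource, which exposes just a replica count so that clients (and RBAC rules) never need to touch the full object. With client-go that is the GetScale/UpdateScale pair; the namespace and name here are from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	sts := client.AppsV1().StatefulSets("statefulset-6195")
	// GetScale returns an autoscaling/v1 Scale, not the StatefulSet itself.
	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2
	// The test then reads the StatefulSet back and checks spec.replicas.
	if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled ss through the scale subresource")
}

One test earlier, the conversion webhook test stores CRs at two versions and lists them at each version; "non homogeneous" means a single ConversionReview can carry objects at a mix of stored versions, all to be returned at request.desiredAPIVersion. The suite deploys a prebuilt webhook image for this; a hand-rolled handler might look roughly like the following sketch, which only relabels apiVersion and so is only valid when the two schemas are structurally identical (TLS, which a real webhook must serve with a certificate matching the CRD's caBundle, is omitted to keep it short):

package main

import (
	"encoding/json"
	"log"
	"net/http"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

func convert(w http.ResponseWriter, r *http.Request) {
	var review apiextv1.ConversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
		return
	}
	resp := &apiextv1.ConversionResponse{
		UID:    review.Request.UID,
		Result: metav1.Status{Status: metav1.StatusSuccess},
	}
	for _, raw := range review.Request.Objects {
		obj := &unstructured.Unstructured{}
		if err := obj.UnmarshalJSON(raw.Raw); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// With identical schemas across versions, converting is relabeling.
		obj.SetAPIVersion(review.Request.DesiredAPIVersion)
		out, err := obj.MarshalJSON()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Raw: out})
	}
	review.Response = resp
	review.Request = nil
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/convert", convert)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
------------------------------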
• [SLOW TEST:30.324 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":290,"completed":43,"skipped":805,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:51:28.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-516 STEP: creating service affinity-nodeport-transition in namespace services-516 STEP: creating replication controller affinity-nodeport-transition in namespace services-516 I0430 23:51:28.351367 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-516, replica count: 3 I0430 23:51:31.401835 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0430 23:51:34.402125 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 30 23:51:34.413: INFO: Creating new exec pod Apr 30 23:51:39.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-516 execpod-affinityr4pnh -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Apr 30 23:51:39.681: INFO: stderr: "I0430 23:51:39.578990 396 log.go:172] (0xc000ad46e0) (0xc0006df040) Create stream\nI0430 23:51:39.579078 396 log.go:172] (0xc000ad46e0) (0xc0006df040) Stream added, broadcasting: 1\nI0430 23:51:39.584591 396 log.go:172] (0xc000ad46e0) Reply frame received for 1\nI0430 23:51:39.584639 396 log.go:172] (0xc000ad46e0) (0xc0006ec000) Create stream\nI0430 23:51:39.584651 396 log.go:172] (0xc000ad46e0) (0xc0006ec000) Stream added, broadcasting: 3\nI0430 23:51:39.586031 396 log.go:172] (0xc000ad46e0) Reply frame received for 3\nI0430 23:51:39.586066 396 log.go:172] (0xc000ad46e0) (0xc0006ecfa0) Create stream\nI0430 23:51:39.586081 396 log.go:172] (0xc000ad46e0) (0xc0006ecfa0) Stream added, broadcasting: 5\nI0430 23:51:39.587370 396 log.go:172] (0xc000ad46e0) Reply frame received for 5\nI0430 23:51:39.673697 396 log.go:172] 
(0xc000ad46e0) Data frame received for 5\nI0430 23:51:39.673717 396 log.go:172] (0xc0006ecfa0) (5) Data frame handling\nI0430 23:51:39.673728 396 log.go:172] (0xc0006ecfa0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0430 23:51:39.674475 396 log.go:172] (0xc000ad46e0) Data frame received for 5\nI0430 23:51:39.674500 396 log.go:172] (0xc0006ecfa0) (5) Data frame handling\nI0430 23:51:39.674512 396 log.go:172] (0xc0006ecfa0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0430 23:51:39.674970 396 log.go:172] (0xc000ad46e0) Data frame received for 5\nI0430 23:51:39.674988 396 log.go:172] (0xc0006ecfa0) (5) Data frame handling\nI0430 23:51:39.675016 396 log.go:172] (0xc000ad46e0) Data frame received for 3\nI0430 23:51:39.675034 396 log.go:172] (0xc0006ec000) (3) Data frame handling\nI0430 23:51:39.676899 396 log.go:172] (0xc000ad46e0) Data frame received for 1\nI0430 23:51:39.676912 396 log.go:172] (0xc0006df040) (1) Data frame handling\nI0430 23:51:39.676919 396 log.go:172] (0xc0006df040) (1) Data frame sent\nI0430 23:51:39.676967 396 log.go:172] (0xc000ad46e0) (0xc0006df040) Stream removed, broadcasting: 1\nI0430 23:51:39.677318 396 log.go:172] (0xc000ad46e0) Go away received\nI0430 23:51:39.677514 396 log.go:172] (0xc000ad46e0) (0xc0006df040) Stream removed, broadcasting: 1\nI0430 23:51:39.677532 396 log.go:172] (0xc000ad46e0) (0xc0006ec000) Stream removed, broadcasting: 3\nI0430 23:51:39.677547 396 log.go:172] (0xc000ad46e0) (0xc0006ecfa0) Stream removed, broadcasting: 5\n" Apr 30 23:51:39.682: INFO: stdout: "" Apr 30 23:51:39.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-516 execpod-affinityr4pnh -- /bin/sh -x -c nc -zv -t -w 2 10.99.0.23 80' Apr 30 23:51:39.907: INFO: stderr: "I0430 23:51:39.821403 418 log.go:172] (0xc0006f3ad0) (0xc000b385a0) Create stream\nI0430 23:51:39.821468 418 log.go:172] (0xc0006f3ad0) (0xc000b385a0) Stream added, broadcasting: 1\nI0430 23:51:39.825399 418 log.go:172] (0xc0006f3ad0) Reply frame received for 1\nI0430 23:51:39.825440 418 log.go:172] (0xc0006f3ad0) (0xc000710d20) Create stream\nI0430 23:51:39.825453 418 log.go:172] (0xc0006f3ad0) (0xc000710d20) Stream added, broadcasting: 3\nI0430 23:51:39.826434 418 log.go:172] (0xc0006f3ad0) Reply frame received for 3\nI0430 23:51:39.826489 418 log.go:172] (0xc0006f3ad0) (0xc00051adc0) Create stream\nI0430 23:51:39.826507 418 log.go:172] (0xc0006f3ad0) (0xc00051adc0) Stream added, broadcasting: 5\nI0430 23:51:39.827443 418 log.go:172] (0xc0006f3ad0) Reply frame received for 5\nI0430 23:51:39.901489 418 log.go:172] (0xc0006f3ad0) Data frame received for 5\nI0430 23:51:39.901535 418 log.go:172] (0xc00051adc0) (5) Data frame handling\nI0430 23:51:39.901556 418 log.go:172] (0xc00051adc0) (5) Data frame sent\nI0430 23:51:39.901573 418 log.go:172] (0xc0006f3ad0) Data frame received for 5\nI0430 23:51:39.901585 418 log.go:172] (0xc00051adc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.0.23 80\nConnection to 10.99.0.23 80 port [tcp/http] succeeded!\nI0430 23:51:39.901635 418 log.go:172] (0xc0006f3ad0) Data frame received for 3\nI0430 23:51:39.901662 418 log.go:172] (0xc000710d20) (3) Data frame handling\nI0430 23:51:39.903028 418 log.go:172] (0xc0006f3ad0) Data frame received for 1\nI0430 23:51:39.903051 418 log.go:172] (0xc000b385a0) (1) Data frame handling\nI0430 23:51:39.903063 418 log.go:172] (0xc000b385a0) (1) Data frame sent\nI0430 23:51:39.903075 
418 log.go:172] (0xc0006f3ad0) (0xc000b385a0) Stream removed, broadcasting: 1\nI0430 23:51:39.903088 418 log.go:172] (0xc0006f3ad0) Go away received\nI0430 23:51:39.903447 418 log.go:172] (0xc0006f3ad0) (0xc000b385a0) Stream removed, broadcasting: 1\nI0430 23:51:39.903470 418 log.go:172] (0xc0006f3ad0) (0xc000710d20) Stream removed, broadcasting: 3\nI0430 23:51:39.903485 418 log.go:172] (0xc0006f3ad0) (0xc00051adc0) Stream removed, broadcasting: 5\n" Apr 30 23:51:39.907: INFO: stdout: "" Apr 30 23:51:39.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-516 execpod-affinityr4pnh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32595' Apr 30 23:51:40.122: INFO: stderr: "I0430 23:51:40.047368 440 log.go:172] (0xc0009f7970) (0xc000b6a5a0) Create stream\nI0430 23:51:40.047436 440 log.go:172] (0xc0009f7970) (0xc000b6a5a0) Stream added, broadcasting: 1\nI0430 23:51:40.052824 440 log.go:172] (0xc0009f7970) Reply frame received for 1\nI0430 23:51:40.052877 440 log.go:172] (0xc0009f7970) (0xc00070c5a0) Create stream\nI0430 23:51:40.052897 440 log.go:172] (0xc0009f7970) (0xc00070c5a0) Stream added, broadcasting: 3\nI0430 23:51:40.054234 440 log.go:172] (0xc0009f7970) Reply frame received for 3\nI0430 23:51:40.054273 440 log.go:172] (0xc0009f7970) (0xc00070caa0) Create stream\nI0430 23:51:40.054283 440 log.go:172] (0xc0009f7970) (0xc00070caa0) Stream added, broadcasting: 5\nI0430 23:51:40.055697 440 log.go:172] (0xc0009f7970) Reply frame received for 5\nI0430 23:51:40.116545 440 log.go:172] (0xc0009f7970) Data frame received for 3\nI0430 23:51:40.116598 440 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0430 23:51:40.116635 440 log.go:172] (0xc0009f7970) Data frame received for 5\nI0430 23:51:40.116655 440 log.go:172] (0xc00070caa0) (5) Data frame handling\nI0430 23:51:40.116668 440 log.go:172] (0xc00070caa0) (5) Data frame sent\nI0430 23:51:40.116680 440 log.go:172] (0xc0009f7970) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 32595\nConnection to 172.17.0.13 32595 port [tcp/32595] succeeded!\nI0430 23:51:40.116691 440 log.go:172] (0xc00070caa0) (5) Data frame handling\nI0430 23:51:40.118379 440 log.go:172] (0xc0009f7970) Data frame received for 1\nI0430 23:51:40.118428 440 log.go:172] (0xc000b6a5a0) (1) Data frame handling\nI0430 23:51:40.118450 440 log.go:172] (0xc000b6a5a0) (1) Data frame sent\nI0430 23:51:40.118473 440 log.go:172] (0xc0009f7970) (0xc000b6a5a0) Stream removed, broadcasting: 1\nI0430 23:51:40.118503 440 log.go:172] (0xc0009f7970) Go away received\nI0430 23:51:40.118871 440 log.go:172] (0xc0009f7970) (0xc000b6a5a0) Stream removed, broadcasting: 1\nI0430 23:51:40.118893 440 log.go:172] (0xc0009f7970) (0xc00070c5a0) Stream removed, broadcasting: 3\nI0430 23:51:40.118903 440 log.go:172] (0xc0009f7970) (0xc00070caa0) Stream removed, broadcasting: 5\n" Apr 30 23:51:40.122: INFO: stdout: "" Apr 30 23:51:40.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-516 execpod-affinityr4pnh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32595' Apr 30 23:51:40.327: INFO: stderr: "I0430 23:51:40.255893 461 log.go:172] (0xc0000e0790) (0xc000676e60) Create stream\nI0430 23:51:40.255950 461 log.go:172] (0xc0000e0790) (0xc000676e60) Stream added, broadcasting: 1\nI0430 23:51:40.259094 461 log.go:172] (0xc0000e0790) Reply frame received for 1\nI0430 23:51:40.259141 461 log.go:172] (0xc0000e0790) (0xc0006545a0) Create 
stream\nI0430 23:51:40.259154 461 log.go:172] (0xc0000e0790) (0xc0006545a0) Stream added, broadcasting: 3\nI0430 23:51:40.260101 461 log.go:172] (0xc0000e0790) Reply frame received for 3\nI0430 23:51:40.260154 461 log.go:172] (0xc0000e0790) (0xc00060ac80) Create stream\nI0430 23:51:40.260195 461 log.go:172] (0xc0000e0790) (0xc00060ac80) Stream added, broadcasting: 5\nI0430 23:51:40.261683 461 log.go:172] (0xc0000e0790) Reply frame received for 5\nI0430 23:51:40.320661 461 log.go:172] (0xc0000e0790) Data frame received for 5\nI0430 23:51:40.320683 461 log.go:172] (0xc00060ac80) (5) Data frame handling\nI0430 23:51:40.320690 461 log.go:172] (0xc00060ac80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32595\nI0430 23:51:40.320911 461 log.go:172] (0xc0000e0790) Data frame received for 5\nI0430 23:51:40.320922 461 log.go:172] (0xc00060ac80) (5) Data frame handling\nI0430 23:51:40.320928 461 log.go:172] (0xc00060ac80) (5) Data frame sent\nConnection to 172.17.0.12 32595 port [tcp/32595] succeeded!\nI0430 23:51:40.321569 461 log.go:172] (0xc0000e0790) Data frame received for 3\nI0430 23:51:40.321581 461 log.go:172] (0xc0006545a0) (3) Data frame handling\nI0430 23:51:40.321720 461 log.go:172] (0xc0000e0790) Data frame received for 5\nI0430 23:51:40.321748 461 log.go:172] (0xc00060ac80) (5) Data frame handling\nI0430 23:51:40.322836 461 log.go:172] (0xc0000e0790) Data frame received for 1\nI0430 23:51:40.322845 461 log.go:172] (0xc000676e60) (1) Data frame handling\nI0430 23:51:40.322852 461 log.go:172] (0xc000676e60) (1) Data frame sent\nI0430 23:51:40.322955 461 log.go:172] (0xc0000e0790) (0xc000676e60) Stream removed, broadcasting: 1\nI0430 23:51:40.323005 461 log.go:172] (0xc0000e0790) Go away received\nI0430 23:51:40.323230 461 log.go:172] (0xc0000e0790) (0xc000676e60) Stream removed, broadcasting: 1\nI0430 23:51:40.323243 461 log.go:172] (0xc0000e0790) (0xc0006545a0) Stream removed, broadcasting: 3\nI0430 23:51:40.323248 461 log.go:172] (0xc0000e0790) (0xc00060ac80) Stream removed, broadcasting: 5\n" Apr 30 23:51:40.327: INFO: stdout: "" Apr 30 23:51:40.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-516 execpod-affinityr4pnh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32595/ ; done' Apr 30 23:51:40.673: INFO: stderr: "I0430 23:51:40.511749 482 log.go:172] (0xc00003b550) (0xc000910820) Create stream\nI0430 23:51:40.511830 482 log.go:172] (0xc00003b550) (0xc000910820) Stream added, broadcasting: 1\nI0430 23:51:40.514896 482 log.go:172] (0xc00003b550) Reply frame received for 1\nI0430 23:51:40.514934 482 log.go:172] (0xc00003b550) (0xc000910d20) Create stream\nI0430 23:51:40.514951 482 log.go:172] (0xc00003b550) (0xc000910d20) Stream added, broadcasting: 3\nI0430 23:51:40.515872 482 log.go:172] (0xc00003b550) Reply frame received for 3\nI0430 23:51:40.515910 482 log.go:172] (0xc00003b550) (0xc0008fc320) Create stream\nI0430 23:51:40.515925 482 log.go:172] (0xc00003b550) (0xc0008fc320) Stream added, broadcasting: 5\nI0430 23:51:40.516752 482 log.go:172] (0xc00003b550) Reply frame received for 5\nI0430 23:51:40.592179 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.592230 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.592247 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.592270 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.592280 482 log.go:172] 
(0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.592297 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.596270 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.596291 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.596313 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.596700 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.596711 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.596717 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.596746 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.596770 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.596790 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.602914 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.602945 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.602972 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.604059 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.604080 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.604098 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.604114 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.604126 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.604135 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.608697 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.608712 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.608722 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.608924 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.608941 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.608948 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.608958 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.608964 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.608969 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.613649 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.613666 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.613681 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.614227 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.614245 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.614261 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.614339 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.614361 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.614381 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.618410 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.618430 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.618444 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.618913 482 log.go:172] (0xc00003b550) Data frame received 
for 3\nI0430 23:51:40.618940 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.618953 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.618970 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.618988 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.618999 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.622112 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.622133 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.622149 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.622443 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.622467 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.622479 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.622495 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.622504 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.622514 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.625994 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.626039 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.626064 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.626663 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.626699 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.626723 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.626760 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.626772 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.626791 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -sI0430 23:51:40.626807 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.626851 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.626881 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.630257 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.630283 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.630303 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.630768 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.630823 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.630846 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.630871 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.630883 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.630903 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.635096 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.635110 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.635118 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.635511 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.635525 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.635533 482 log.go:172] (0xc0008fc320) (5) Data frame sent\nI0430 23:51:40.635541 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.635547 482 
log.go:172] (0xc0008fc320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.635558 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.635570 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.635578 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.635589 482 log.go:172] (0xc0008fc320) (5) Data frame sent\nI0430 23:51:40.639392 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.639407 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.639424 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.639800 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.639822 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.639832 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.639849 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.639858 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.639866 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.643564 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.643580 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.643594 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.644047 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.644072 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.644083 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.644106 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.644113 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.644139 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.648458 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.648490 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.648525 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.648830 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.648843 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.648848 482 log.go:172] (0xc0008fc320) (5) Data frame sent\nI0430 23:51:40.648853 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.648859 482 log.go:172] (0xc0008fc320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.648872 482 log.go:172] (0xc0008fc320) (5) Data frame sent\nI0430 23:51:40.648891 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.648911 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.648930 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.652797 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.652817 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.652835 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.653312 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.653330 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.653351 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.653384 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.653399 482 log.go:172] (0xc0008fc320) (5) 
Data frame handling\nI0430 23:51:40.653415 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.657427 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.657448 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.657463 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.657935 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.657967 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.657984 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.658001 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.658013 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.658029 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.661313 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.661343 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.661365 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.661643 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.661654 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.661659 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.661667 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.661671 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.661676 482 log.go:172] (0xc0008fc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.664944 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.664962 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.664972 482 log.go:172] (0xc000910d20) (3) Data frame sent\nI0430 23:51:40.666445 482 log.go:172] (0xc00003b550) Data frame received for 3\nI0430 23:51:40.666473 482 log.go:172] (0xc000910d20) (3) Data frame handling\nI0430 23:51:40.666498 482 log.go:172] (0xc00003b550) Data frame received for 5\nI0430 23:51:40.666507 482 log.go:172] (0xc0008fc320) (5) Data frame handling\nI0430 23:51:40.668326 482 log.go:172] (0xc00003b550) Data frame received for 1\nI0430 23:51:40.668346 482 log.go:172] (0xc000910820) (1) Data frame handling\nI0430 23:51:40.668364 482 log.go:172] (0xc000910820) (1) Data frame sent\nI0430 23:51:40.668388 482 log.go:172] (0xc00003b550) (0xc000910820) Stream removed, broadcasting: 1\nI0430 23:51:40.668484 482 log.go:172] (0xc00003b550) Go away received\nI0430 23:51:40.668677 482 log.go:172] (0xc00003b550) (0xc000910820) Stream removed, broadcasting: 1\nI0430 23:51:40.668703 482 log.go:172] (0xc00003b550) (0xc000910d20) Stream removed, broadcasting: 3\nI0430 23:51:40.668716 482 log.go:172] (0xc00003b550) (0xc0008fc320) Stream removed, broadcasting: 5\n" Apr 30 23:51:40.674: INFO: stdout: 
"\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-nlsp9\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-nlsp9\naffinity-nodeport-transition-nlsp9\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-c9qgg\naffinity-nodeport-transition-nlsp9\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-c9qgg\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-nlsp9\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t" Apr 30 23:51:40.674: INFO: Received response from host: Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-nlsp9 Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-nlsp9 Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-nlsp9 Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-c9qgg Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-nlsp9 Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-c9qgg Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-nlsp9 Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.674: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-516 execpod-affinityr4pnh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32595/ ; done' Apr 30 23:51:40.961: INFO: stderr: "I0430 23:51:40.812881 501 log.go:172] (0xc000ad58c0) (0xc000a18460) Create stream\nI0430 23:51:40.812934 501 log.go:172] (0xc000ad58c0) (0xc000a18460) Stream added, broadcasting: 1\nI0430 23:51:40.816451 501 log.go:172] (0xc000ad58c0) Reply frame received for 1\nI0430 23:51:40.816489 501 log.go:172] (0xc000ad58c0) (0xc0006090e0) Create stream\nI0430 23:51:40.816502 501 log.go:172] (0xc000ad58c0) (0xc0006090e0) Stream added, broadcasting: 3\nI0430 23:51:40.818197 501 log.go:172] (0xc000ad58c0) Reply frame received for 3\nI0430 23:51:40.818261 501 log.go:172] (0xc000ad58c0) (0xc00099a280) Create stream\nI0430 23:51:40.818280 501 log.go:172] (0xc000ad58c0) (0xc00099a280) Stream added, broadcasting: 5\nI0430 23:51:40.819437 501 log.go:172] (0xc000ad58c0) Reply frame received for 5\nI0430 23:51:40.870454 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.870492 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.870508 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.870543 501 
log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.870556 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.870570 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.873769 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.873789 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.873807 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.874569 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.874597 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.874614 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.874645 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.874660 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.874673 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.880820 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.880900 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.881000 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.881521 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.881566 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.881588 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.881617 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.881635 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.881656 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.884294 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.884326 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.884356 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.884795 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.884810 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.884825 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.884861 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.884902 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.884938 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.888488 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.888508 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.888520 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.888807 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.888825 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.888837 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.888855 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.888865 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.888881 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.892837 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.892854 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.892869 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.893624 501 log.go:172] (0xc000ad58c0) Data 
frame received for 3\nI0430 23:51:40.893641 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.893651 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.893695 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.893722 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.893758 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.897495 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.897516 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.897548 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.898158 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.898180 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.898193 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.898214 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.898234 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.898249 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.905735 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.905769 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.905801 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.906107 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.906130 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.906143 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.906151 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.906157 501 log.go:172] (0xc00099a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.906170 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.906177 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.906182 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.906203 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.909895 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.909929 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.909966 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.910279 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.910296 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.910310 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.910316 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.910322 501 log.go:172] (0xc00099a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.910335 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.910370 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.910403 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.910422 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.918208 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.918230 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.918246 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.918905 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 
23:51:40.918933 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.918972 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.919014 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.919040 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.919064 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.919077 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.919090 501 log.go:172] (0xc00099a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.919112 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.924083 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.924113 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.924133 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.924570 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.924602 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.924636 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.924661 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.924672 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.924688 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.928037 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.928063 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.928085 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.928333 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.928356 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.928365 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.928376 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.928382 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.928388 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.928411 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.928419 501 log.go:172] (0xc00099a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.928435 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.934004 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.934018 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.934030 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.934651 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.934668 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.934677 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.934689 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.934705 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.934715 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.938968 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.938992 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.939010 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.939403 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.939432 501 log.go:172] 
(0xc00099a280) (5) Data frame handling\nI0430 23:51:40.939442 501 log.go:172] (0xc00099a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.939454 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.939464 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.939474 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.943366 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.943407 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.943433 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.943942 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.943966 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.943995 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.944012 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.944028 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.944038 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.944050 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.944066 501 log.go:172] (0xc00099a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.944089 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.948174 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.948211 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.948241 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.948577 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.948594 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.948615 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.948632 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.948652 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.948668 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.948680 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.948689 501 log.go:172] (0xc00099a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32595/\nI0430 23:51:40.948726 501 log.go:172] (0xc00099a280) (5) Data frame sent\nI0430 23:51:40.953497 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.953517 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.953528 501 log.go:172] (0xc0006090e0) (3) Data frame sent\nI0430 23:51:40.954347 501 log.go:172] (0xc000ad58c0) Data frame received for 3\nI0430 23:51:40.954389 501 log.go:172] (0xc0006090e0) (3) Data frame handling\nI0430 23:51:40.954462 501 log.go:172] (0xc000ad58c0) Data frame received for 5\nI0430 23:51:40.954495 501 log.go:172] (0xc00099a280) (5) Data frame handling\nI0430 23:51:40.956231 501 log.go:172] (0xc000ad58c0) Data frame received for 1\nI0430 23:51:40.956251 501 log.go:172] (0xc000a18460) (1) Data frame handling\nI0430 23:51:40.956266 501 log.go:172] (0xc000a18460) (1) Data frame sent\nI0430 23:51:40.956387 501 log.go:172] (0xc000ad58c0) (0xc000a18460) Stream removed, broadcasting: 1\nI0430 23:51:40.956599 501 log.go:172] (0xc000ad58c0) Go away received\nI0430 23:51:40.956815 501 log.go:172] (0xc000ad58c0) (0xc000a18460) Stream removed, broadcasting: 1\nI0430 23:51:40.956837 501 log.go:172] (0xc000ad58c0) (0xc0006090e0) Stream removed, 
broadcasting: 3\nI0430 23:51:40.956852 501 log.go:172] (0xc000ad58c0) (0xc00099a280) Stream removed, broadcasting: 5\n" Apr 30 23:51:40.962: INFO: stdout: "\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t\naffinity-nodeport-transition-ch79t" Apr 30 23:51:40.962: INFO: Received response from host: Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Received response from host: affinity-nodeport-transition-ch79t Apr 30 23:51:40.962: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-516, will wait for the garbage collector to delete the pods Apr 30 23:51:41.078: INFO: Deleting ReplicationController affinity-nodeport-transition took: 20.38833ms Apr 30 23:51:41.478: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.344305ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:51:55.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-516" for this suite. 
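At the API level, the affinity switch exercised above is a single field on the Service. A minimal sketch, assuming the affinity-nodeport-transition Service from this run (the exact patch the framework applies is not shown in the log):

# With sessionAffinity=None, the 16-curl loop above fans out across all three pods.
kubectl patch service affinity-nodeport-transition --namespace=services-516 -p '{"spec":{"sessionAffinity":"None"}}'
# With sessionAffinity=ClientIP, the same loop pins to a single pod (ch79t in this run).
kubectl patch service affinity-nodeport-transition --namespace=services-516 -p '{"spec":{"sessionAffinity":"ClientIP"}}'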
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.315 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":290,"completed":44,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:51:55.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-0da7cbe2-0f95-4bfe-80b2-f05782a8bd70 in namespace container-probe-5063 Apr 30 23:51:59.589: INFO: Started pod liveness-0da7cbe2-0f95-4bfe-80b2-f05782a8bd70 in namespace container-probe-5063 STEP: checking the pod's current state and verifying that restartCount is present Apr 30 23:51:59.592: INFO: Initial restart count of pod liveness-0da7cbe2-0f95-4bfe-80b2-f05782a8bd70 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 30 23:56:00.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5063" for this suite. 
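The pod above pairs a tcpSocket liveness probe on port 8080 with a server listening there, and asserts the restart count stays at 0 for roughly four minutes. A minimal stand-in, assuming python:3 as the container image (the e2e test uses its own test image; the pod name, image, and probe timings here are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo          # hypothetical name; the test generates a UUID-based one
spec:
  containers:
  - name: liveness
    image: python:3                # stand-in server listening on TCP 8080
    args: ["python", "-m", "http.server", "8080"]
    livenessProbe:
      tcpSocket:
        port: 8080                 # probe passes as long as the port accepts connections
      initialDelaySeconds: 15
      periodSeconds: 10
EOF
# The RESTARTS column should stay at 0, mirroring the restartCount check above.
kubectl get pod liveness-tcp-demo --watch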
• [SLOW TEST:244.878 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":290,"completed":45,"skipped":863,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 30 23:56:00.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-7906b446-5ceb-4b74-ab2a-52df282f31af in namespace container-probe-6277 Apr 30 23:56:04.740: INFO: Started pod test-webserver-7906b446-5ceb-4b74-ab2a-52df282f31af in namespace container-probe-6277 STEP: checking the pod's current state and verifying that restartCount is present Apr 30 23:56:04.744: INFO: Initial restart count of pod test-webserver-7906b446-5ceb-4b74-ab2a-52df282f31af is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:00:05.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6277" for this suite. 
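The /healthz variant is the same pattern with an httpGet probe in place of tcpSocket. A minimal stand-in with an inline Python handler that answers 200 on every GET (again, the pod name, image, and timings are assumptions, not the test's actual manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: healthz-demo               # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: python:3                # stand-in for the test's webserver image
    args:
    - python
    - -c
    - |
      from http.server import BaseHTTPRequestHandler, HTTPServer
      class H(BaseHTTPRequestHandler):
          def do_GET(self):        # answer 200 on /healthz (and everything else)
              self.send_response(200); self.end_headers(); self.wfile.write(b"ok")
      HTTPServer(("", 8080), H).serve_forever()
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
EOF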
• [SLOW TEST:245.225 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":290,"completed":46,"skipped":875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:00:05.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-2931 STEP: creating replication controller nodeport-test in namespace services-2931 I0501 00:00:06.062860 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2931, replica count: 2 I0501 00:00:09.113470 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:00:12.113734 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:00:12.113: INFO: Creating new exec pod May 1 00:00:17.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2931 execpodpt4hn -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 1 00:00:20.324: INFO: stderr: "I0501 00:00:20.229620 518 log.go:172] (0xc0009d8c60) (0xc0005e8e60) Create stream\nI0501 00:00:20.229659 518 log.go:172] (0xc0009d8c60) (0xc0005e8e60) Stream added, broadcasting: 1\nI0501 00:00:20.232248 518 log.go:172] (0xc0009d8c60) Reply frame received for 1\nI0501 00:00:20.232284 518 log.go:172] (0xc0009d8c60) (0xc0005e9400) Create stream\nI0501 00:00:20.232294 518 log.go:172] (0xc0009d8c60) (0xc0005e9400) Stream added, broadcasting: 3\nI0501 00:00:20.233605 518 log.go:172] (0xc0009d8c60) Reply frame received for 3\nI0501 00:00:20.233642 518 log.go:172] (0xc0009d8c60) (0xc0005e9ae0) Create stream\nI0501 00:00:20.233656 518 log.go:172] (0xc0009d8c60) (0xc0005e9ae0) Stream added, broadcasting: 5\nI0501 00:00:20.234439 518 log.go:172] (0xc0009d8c60) Reply frame received for 5\nI0501 00:00:20.314751 518 log.go:172] (0xc0009d8c60) Data frame received for 5\nI0501 00:00:20.314799 518 log.go:172] (0xc0005e9ae0) (5) Data frame handling\nI0501 00:00:20.314858 518 log.go:172] (0xc0005e9ae0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0501 00:00:20.315220 518 log.go:172] (0xc0009d8c60) 
Data frame received for 5\nI0501 00:00:20.315243 518 log.go:172] (0xc0005e9ae0) (5) Data frame handling\nI0501 00:00:20.315273 518 log.go:172] (0xc0005e9ae0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0501 00:00:20.315684 518 log.go:172] (0xc0009d8c60) Data frame received for 5\nI0501 00:00:20.315701 518 log.go:172] (0xc0005e9ae0) (5) Data frame handling\nI0501 00:00:20.316025 518 log.go:172] (0xc0009d8c60) Data frame received for 3\nI0501 00:00:20.316057 518 log.go:172] (0xc0005e9400) (3) Data frame handling\nI0501 00:00:20.317994 518 log.go:172] (0xc0009d8c60) Data frame received for 1\nI0501 00:00:20.318031 518 log.go:172] (0xc0005e8e60) (1) Data frame handling\nI0501 00:00:20.318057 518 log.go:172] (0xc0005e8e60) (1) Data frame sent\nI0501 00:00:20.318082 518 log.go:172] (0xc0009d8c60) (0xc0005e8e60) Stream removed, broadcasting: 1\nI0501 00:00:20.318103 518 log.go:172] (0xc0009d8c60) Go away received\nI0501 00:00:20.318626 518 log.go:172] (0xc0009d8c60) (0xc0005e8e60) Stream removed, broadcasting: 1\nI0501 00:00:20.318654 518 log.go:172] (0xc0009d8c60) (0xc0005e9400) Stream removed, broadcasting: 3\nI0501 00:00:20.318668 518 log.go:172] (0xc0009d8c60) (0xc0005e9ae0) Stream removed, broadcasting: 5\n" May 1 00:00:20.325: INFO: stdout: "" May 1 00:00:20.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2931 execpodpt4hn -- /bin/sh -x -c nc -zv -t -w 2 10.96.247.28 80' May 1 00:00:20.540: INFO: stderr: "I0501 00:00:20.456483 551 log.go:172] (0xc0009ca000) (0xc000338aa0) Create stream\nI0501 00:00:20.456546 551 log.go:172] (0xc0009ca000) (0xc000338aa0) Stream added, broadcasting: 1\nI0501 00:00:20.459176 551 log.go:172] (0xc0009ca000) Reply frame received for 1\nI0501 00:00:20.459215 551 log.go:172] (0xc0009ca000) (0xc00026cfa0) Create stream\nI0501 00:00:20.459226 551 log.go:172] (0xc0009ca000) (0xc00026cfa0) Stream added, broadcasting: 3\nI0501 00:00:20.460271 551 log.go:172] (0xc0009ca000) Reply frame received for 3\nI0501 00:00:20.460308 551 log.go:172] (0xc0009ca000) (0xc00016bea0) Create stream\nI0501 00:00:20.460317 551 log.go:172] (0xc0009ca000) (0xc00016bea0) Stream added, broadcasting: 5\nI0501 00:00:20.461491 551 log.go:172] (0xc0009ca000) Reply frame received for 5\nI0501 00:00:20.532674 551 log.go:172] (0xc0009ca000) Data frame received for 3\nI0501 00:00:20.532722 551 log.go:172] (0xc00026cfa0) (3) Data frame handling\nI0501 00:00:20.532755 551 log.go:172] (0xc0009ca000) Data frame received for 5\nI0501 00:00:20.532782 551 log.go:172] (0xc00016bea0) (5) Data frame handling\nI0501 00:00:20.532808 551 log.go:172] (0xc00016bea0) (5) Data frame sent\nI0501 00:00:20.532831 551 log.go:172] (0xc0009ca000) Data frame received for 5\nI0501 00:00:20.532849 551 log.go:172] (0xc00016bea0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.247.28 80\nConnection to 10.96.247.28 80 port [tcp/http] succeeded!\nI0501 00:00:20.534606 551 log.go:172] (0xc0009ca000) Data frame received for 1\nI0501 00:00:20.534636 551 log.go:172] (0xc000338aa0) (1) Data frame handling\nI0501 00:00:20.534664 551 log.go:172] (0xc000338aa0) (1) Data frame sent\nI0501 00:00:20.534889 551 log.go:172] (0xc0009ca000) (0xc000338aa0) Stream removed, broadcasting: 1\nI0501 00:00:20.534963 551 log.go:172] (0xc0009ca000) Go away received\nI0501 00:00:20.535121 551 log.go:172] (0xc0009ca000) (0xc000338aa0) Stream removed, broadcasting: 1\nI0501 00:00:20.535133 551 log.go:172] (0xc0009ca000) 
(0xc00026cfa0) Stream removed, broadcasting: 3\nI0501 00:00:20.535139 551 log.go:172] (0xc0009ca000) (0xc00016bea0) Stream removed, broadcasting: 5\n" May 1 00:00:20.540: INFO: stdout: "" May 1 00:00:20.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2931 execpodpt4hn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32432' May 1 00:00:20.772: INFO: stderr: "I0501 00:00:20.687651 572 log.go:172] (0xc0009b9760) (0xc000b1a1e0) Create stream\nI0501 00:00:20.687704 572 log.go:172] (0xc0009b9760) (0xc000b1a1e0) Stream added, broadcasting: 1\nI0501 00:00:20.692677 572 log.go:172] (0xc0009b9760) Reply frame received for 1\nI0501 00:00:20.692724 572 log.go:172] (0xc0009b9760) (0xc000686140) Create stream\nI0501 00:00:20.692741 572 log.go:172] (0xc0009b9760) (0xc000686140) Stream added, broadcasting: 3\nI0501 00:00:20.693780 572 log.go:172] (0xc0009b9760) Reply frame received for 3\nI0501 00:00:20.693811 572 log.go:172] (0xc0009b9760) (0xc0006866e0) Create stream\nI0501 00:00:20.693827 572 log.go:172] (0xc0009b9760) (0xc0006866e0) Stream added, broadcasting: 5\nI0501 00:00:20.694587 572 log.go:172] (0xc0009b9760) Reply frame received for 5\nI0501 00:00:20.765574 572 log.go:172] (0xc0009b9760) Data frame received for 5\nI0501 00:00:20.765614 572 log.go:172] (0xc0006866e0) (5) Data frame handling\nI0501 00:00:20.765640 572 log.go:172] (0xc0006866e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32432\nI0501 00:00:20.766045 572 log.go:172] (0xc0009b9760) Data frame received for 5\nI0501 00:00:20.766071 572 log.go:172] (0xc0006866e0) (5) Data frame handling\nI0501 00:00:20.766083 572 log.go:172] (0xc0006866e0) (5) Data frame sent\nI0501 00:00:20.766090 572 log.go:172] (0xc0009b9760) Data frame received for 5\nI0501 00:00:20.766102 572 log.go:172] (0xc0006866e0) (5) Data frame handling\nConnection to 172.17.0.13 32432 port [tcp/32432] succeeded!\nI0501 00:00:20.766136 572 log.go:172] (0xc0009b9760) Data frame received for 3\nI0501 00:00:20.766161 572 log.go:172] (0xc000686140) (3) Data frame handling\nI0501 00:00:20.767554 572 log.go:172] (0xc0009b9760) Data frame received for 1\nI0501 00:00:20.767575 572 log.go:172] (0xc000b1a1e0) (1) Data frame handling\nI0501 00:00:20.767601 572 log.go:172] (0xc000b1a1e0) (1) Data frame sent\nI0501 00:00:20.767642 572 log.go:172] (0xc0009b9760) (0xc000b1a1e0) Stream removed, broadcasting: 1\nI0501 00:00:20.767692 572 log.go:172] (0xc0009b9760) Go away received\nI0501 00:00:20.767958 572 log.go:172] (0xc0009b9760) (0xc000b1a1e0) Stream removed, broadcasting: 1\nI0501 00:00:20.768036 572 log.go:172] (0xc0009b9760) (0xc000686140) Stream removed, broadcasting: 3\nI0501 00:00:20.768073 572 log.go:172] (0xc0009b9760) (0xc0006866e0) Stream removed, broadcasting: 5\n" May 1 00:00:20.772: INFO: stdout: "" May 1 00:00:20.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2931 execpodpt4hn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32432' May 1 00:00:20.978: INFO: stderr: "I0501 00:00:20.903583 593 log.go:172] (0xc000a774a0) (0xc000ac0320) Create stream\nI0501 00:00:20.903651 593 log.go:172] (0xc000a774a0) (0xc000ac0320) Stream added, broadcasting: 1\nI0501 00:00:20.907628 593 log.go:172] (0xc000a774a0) Reply frame received for 1\nI0501 00:00:20.907654 593 log.go:172] (0xc000a774a0) (0xc000633c20) Create stream\nI0501 00:00:20.907661 593 log.go:172] (0xc000a774a0) (0xc000633c20) Stream added, broadcasting: 
3\nI0501 00:00:20.908597 593 log.go:172] (0xc000a774a0) Reply frame received for 3\nI0501 00:00:20.908637 593 log.go:172] (0xc000a774a0) (0xc000ac03c0) Create stream\nI0501 00:00:20.908655 593 log.go:172] (0xc000a774a0) (0xc000ac03c0) Stream added, broadcasting: 5\nI0501 00:00:20.909692 593 log.go:172] (0xc000a774a0) Reply frame received for 5\nI0501 00:00:20.969975 593 log.go:172] (0xc000a774a0) Data frame received for 5\nI0501 00:00:20.969999 593 log.go:172] (0xc000ac03c0) (5) Data frame handling\nI0501 00:00:20.970201 593 log.go:172] (0xc000ac03c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32432\nConnection to 172.17.0.12 32432 port [tcp/32432] succeeded!\nI0501 00:00:20.970796 593 log.go:172] (0xc000a774a0) Data frame received for 5\nI0501 00:00:20.970825 593 log.go:172] (0xc000ac03c0) (5) Data frame handling\nI0501 00:00:20.973316 593 log.go:172] (0xc000a774a0) Data frame received for 3\nI0501 00:00:20.973380 593 log.go:172] (0xc000633c20) (3) Data frame handling\nI0501 00:00:20.973448 593 log.go:172] (0xc000a774a0) Data frame received for 1\nI0501 00:00:20.973481 593 log.go:172] (0xc000ac0320) (1) Data frame handling\nI0501 00:00:20.973506 593 log.go:172] (0xc000ac0320) (1) Data frame sent\nI0501 00:00:20.973532 593 log.go:172] (0xc000a774a0) (0xc000ac0320) Stream removed, broadcasting: 1\nI0501 00:00:20.973560 593 log.go:172] (0xc000a774a0) Go away received\nI0501 00:00:20.974165 593 log.go:172] (0xc000a774a0) (0xc000ac0320) Stream removed, broadcasting: 1\nI0501 00:00:20.974181 593 log.go:172] (0xc000a774a0) (0xc000633c20) Stream removed, broadcasting: 3\nI0501 00:00:20.974189 593 log.go:172] (0xc000a774a0) (0xc000ac03c0) Stream removed, broadcasting: 5\n" May 1 00:00:20.978: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:00:20.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2931" for this suite. 
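The spec above validates a NodePort service by exec'ing into a helper pod and probing each node's InternalIP on the allocated port with `nc -zv`; in this run both workers (172.17.0.13 and 172.17.0.12) answered on port 32432. A minimal sketch of the same check done by hand, assuming a configured kubectl; the deployment and service names are illustrative, not the generated names the suite uses:

  # create a backend and expose it on a NodePort
  kubectl create deployment echo --image=nginx
  kubectl expose deployment echo --type=NodePort --port=80

  # find the port the control plane allocated
  NODE_PORT=$(kubectl get svc echo -o jsonpath='{.spec.ports[0].nodePort}')

  # probe every node's InternalIP on that port, as the suite does via nc -zv
  for ip in $(kubectl get nodes \
      -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
    nc -zv -w 2 "$ip" "$NODE_PORT"
  done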
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:15.462 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":290,"completed":47,"skipped":912,"failed":0} [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:00:20.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:00:21.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4376' May 1 00:00:21.445: INFO: stderr: "" May 1 00:00:21.445: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 1 00:00:21.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4376' May 1 00:00:21.814: INFO: stderr: "" May 1 00:00:21.815: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 1 00:00:22.820: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:00:22.820: INFO: Found 0 / 1 May 1 00:00:23.819: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:00:23.819: INFO: Found 0 / 1 May 1 00:00:24.818: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:00:24.818: INFO: Found 1 / 1 May 1 00:00:24.819: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 00:00:24.820: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:00:24.820: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
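The manifests piped to `create -f -` above are not echoed into the log. Below is a plausible reconstruction of the replication controller, inferred from the `kubectl describe` output that follows (labels app=agnhost/role=master, image agnhost:2.13, container port 6379); treat it as a sketch, not the suite's exact fixture:

  kubectl create -f - --namespace=kubectl-4376 <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: agnhost-master
  spec:
    replicas: 1
    selector:
      app: agnhost
      role: master
    template:
      metadata:
        labels:
          app: agnhost
          role: master
      spec:
        containers:
        - name: agnhost-master
          image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
          ports:
          - containerPort: 6379
  EOF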
May 1 00:00:24.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-zrrph --namespace=kubectl-4376' May 1 00:00:24.924: INFO: stderr: "" May 1 00:00:24.924: INFO: stdout: "Name: agnhost-master-zrrph\nNamespace: kubectl-4376\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Fri, 01 May 2020 00:00:21 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.59\nIPs:\n IP: 10.244.1.59\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://508ab2e51f97923f830c8e474094a5a7f9ddd62e853f4f01dffcbe553c5d8034\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 01 May 2020 00:00:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-zsl5h (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-zsl5h:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-zsl5h\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4376/agnhost-master-zrrph to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 0s kubelet, latest-worker Started container agnhost-master\n" May 1 00:00:24.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4376' May 1 00:00:25.086: INFO: stderr: "" May 1 00:00:25.086: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4376\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-zrrph\n" May 1 00:00:25.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4376' May 1 00:00:25.196: INFO: stderr: "" May 1 00:00:25.196: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4376\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.109.107.152\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.59:6379\nSession Affinity: None\nEvents: \n" May 1 00:00:25.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node 
latest-control-plane' May 1 00:00:25.325: INFO: stderr: "" May 1 00:00:25.325: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Fri, 01 May 2020 00:00:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 01 May 2020 00:00:22 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 01 May 2020 00:00:22 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 01 May 2020 00:00:22 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 01 May 2020 00:00:22 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38h\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38h\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 38h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 38h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38h\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38h\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 1 00:00:25.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-4376' May 1 00:00:25.433: INFO: stderr: "" May 1 00:00:25.433: INFO: stdout: "Name: kubectl-4376\nLabels: e2e-framework=kubectl\n e2e-run=9ab8d1a1-3d60-4d70-a889-b678d634ffae\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:00:25.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4376" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":290,"completed":48,"skipped":912,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:00:25.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 1 00:00:33.739: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 00:00:33.816: INFO: Pod pod-with-poststart-http-hook still exists May 1 00:00:35.816: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 00:00:35.821: INFO: Pod pod-with-poststart-http-hook still exists May 1 00:00:37.816: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 1 00:00:37.820: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:00:37.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-769" for this suite. 
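For reference, a postStart httpGet hook of the kind exercised above looks like the sketch below. The handler address, port, and path are assumptions (the suite points the hook at a separately created handler pod whose IP is not logged here), so treat this as illustrative only:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.2
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.1.100   # assumed handler pod IP, not taken from this run
            port: 8080
            path: /echo?msg=poststart
  EOF

The kubelet does not mark the container Running until the postStart handler completes, which is why the spec can assert the handler saw the request before deleting the pod.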
• [SLOW TEST:12.384 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":290,"completed":49,"skipped":920,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:00:37.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:00:37.951: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:00:44.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5408" for this suite. 
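The body of the CRD-listing spec above is client-go calls and leaves little in the log; the equivalent flow with kubectl, using an illustrative CRD (the suite generates random group and kind names), is:

  # create a throwaway CRD
  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com
  spec:
    group: stable.example.com
    scope: Namespaced
    names:
      plural: crontabs
      singular: crontab
      kind: CronTab
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF

  # list definitions, as the spec does through the apiextensions client, then clean up
  kubectl get customresourcedefinitions
  kubectl delete crd crontabs.stable.example.com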
• [SLOW TEST:6.640 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":290,"completed":50,"skipped":926,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:00:44.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-94546413-bdda-4d8d-b542-a57a63be31ff STEP: Creating a pod to test consume secrets May 1 00:00:44.584: INFO: Waiting up to 5m0s for pod "pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50" in namespace "secrets-3903" to be "Succeeded or Failed" May 1 00:00:44.604: INFO: Pod "pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50": Phase="Pending", Reason="", readiness=false. Elapsed: 19.847792ms May 1 00:00:46.608: INFO: Pod "pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023874628s May 1 00:00:48.613: INFO: Pod "pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028295873s STEP: Saw pod success May 1 00:00:48.613: INFO: Pod "pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50" satisfied condition "Succeeded or Failed" May 1 00:00:48.616: INFO: Trying to get logs from node latest-worker pod pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50 container secret-volume-test: STEP: delete the pod May 1 00:00:48.671: INFO: Waiting for pod pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50 to disappear May 1 00:00:48.707: INFO: Pod pod-secrets-8ce7ff0f-52ff-4ed4-8464-e001ea300c50 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:00:48.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3903" for this suite. 
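The "mappings" in the secrets spec's name refers to the `items` field of a secret volume, which surfaces a key under an arbitrary file path inside the mount. A minimal sketch with illustrative names and data (the suite uses generated names and also verifies the file's content and mode from inside the test container):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-map
  data:
    data-1: dmFsdWUtMQ==          # base64("value-1")
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-mapped
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - key: data-1
          path: new-path-data-1   # key data-1 appears as this file, not as /data-1
  EOF
  kubectl logs pod-secrets-mapped   # prints "value-1" once the pod has Succeeded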
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":51,"skipped":932,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:00:48.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1075 May 1 00:00:52.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 1 00:00:53.109: INFO: stderr: "I0501 00:00:53.006549 757 log.go:172] (0xc000a88000) (0xc00014f7c0) Create stream\nI0501 00:00:53.006658 757 log.go:172] (0xc000a88000) (0xc00014f7c0) Stream added, broadcasting: 1\nI0501 00:00:53.008851 757 log.go:172] (0xc000a88000) Reply frame received for 1\nI0501 00:00:53.008890 757 log.go:172] (0xc000a88000) (0xc00014fcc0) Create stream\nI0501 00:00:53.008899 757 log.go:172] (0xc000a88000) (0xc00014fcc0) Stream added, broadcasting: 3\nI0501 00:00:53.009853 757 log.go:172] (0xc000a88000) Reply frame received for 3\nI0501 00:00:53.009881 757 log.go:172] (0xc000a88000) (0xc000b06500) Create stream\nI0501 00:00:53.009893 757 log.go:172] (0xc000a88000) (0xc000b06500) Stream added, broadcasting: 5\nI0501 00:00:53.010623 757 log.go:172] (0xc000a88000) Reply frame received for 5\nI0501 00:00:53.098046 757 log.go:172] (0xc000a88000) Data frame received for 5\nI0501 00:00:53.098081 757 log.go:172] (0xc000b06500) (5) Data frame handling\nI0501 00:00:53.098103 757 log.go:172] (0xc000b06500) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0501 00:00:53.102232 757 log.go:172] (0xc000a88000) Data frame received for 3\nI0501 00:00:53.102256 757 log.go:172] (0xc00014fcc0) (3) Data frame handling\nI0501 00:00:53.102277 757 log.go:172] (0xc00014fcc0) (3) Data frame sent\nI0501 00:00:53.102635 757 log.go:172] (0xc000a88000) Data frame received for 5\nI0501 00:00:53.102654 757 log.go:172] (0xc000b06500) (5) Data frame handling\nI0501 00:00:53.102748 757 log.go:172] (0xc000a88000) Data frame received for 3\nI0501 00:00:53.102777 757 log.go:172] (0xc00014fcc0) (3) Data frame handling\nI0501 00:00:53.104379 757 log.go:172] (0xc000a88000) Data frame received for 1\nI0501 00:00:53.104405 757 log.go:172] (0xc00014f7c0) (1) Data frame handling\nI0501 00:00:53.104420 757 log.go:172] (0xc00014f7c0) (1) Data frame sent\nI0501 00:00:53.104436 757 log.go:172] (0xc000a88000) (0xc00014f7c0) Stream removed, broadcasting: 1\nI0501 00:00:53.104459 757 log.go:172] (0xc000a88000) Go away received\nI0501 00:00:53.104892 757 
log.go:172] (0xc000a88000) (0xc00014f7c0) Stream removed, broadcasting: 1\nI0501 00:00:53.104920 757 log.go:172] (0xc000a88000) (0xc00014fcc0) Stream removed, broadcasting: 3\nI0501 00:00:53.104931 757 log.go:172] (0xc000a88000) (0xc000b06500) Stream removed, broadcasting: 5\n" May 1 00:00:53.110: INFO: stdout: "iptables" May 1 00:00:53.110: INFO: proxyMode: iptables May 1 00:00:53.115: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:00:53.130: INFO: Pod kube-proxy-mode-detector still exists May 1 00:00:55.131: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:00:55.135: INFO: Pod kube-proxy-mode-detector still exists May 1 00:00:57.131: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:00:57.133: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1075 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1075 I0501 00:00:57.180192 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1075, replica count: 3 I0501 00:01:00.230671 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:01:03.230946 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:01:03.237: INFO: Creating new exec pod May 1 00:01:08.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 execpod-affinity8rtks -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 1 00:01:08.461: INFO: stderr: "I0501 00:01:08.384973 778 log.go:172] (0xc00041c000) (0xc0002eedc0) Create stream\nI0501 00:01:08.385064 778 log.go:172] (0xc00041c000) (0xc0002eedc0) Stream added, broadcasting: 1\nI0501 00:01:08.388451 778 log.go:172] (0xc00041c000) Reply frame received for 1\nI0501 00:01:08.388504 778 log.go:172] (0xc00041c000) (0xc00013bcc0) Create stream\nI0501 00:01:08.388520 778 log.go:172] (0xc00041c000) (0xc00013bcc0) Stream added, broadcasting: 3\nI0501 00:01:08.389966 778 log.go:172] (0xc00041c000) Reply frame received for 3\nI0501 00:01:08.390003 778 log.go:172] (0xc00041c000) (0xc00058c280) Create stream\nI0501 00:01:08.390015 778 log.go:172] (0xc00041c000) (0xc00058c280) Stream added, broadcasting: 5\nI0501 00:01:08.390984 778 log.go:172] (0xc00041c000) Reply frame received for 5\nI0501 00:01:08.451692 778 log.go:172] (0xc00041c000) Data frame received for 5\nI0501 00:01:08.451747 778 log.go:172] (0xc00058c280) (5) Data frame handling\nI0501 00:01:08.451788 778 log.go:172] (0xc00058c280) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0501 00:01:08.452626 778 log.go:172] (0xc00041c000) Data frame received for 5\nI0501 00:01:08.452667 778 log.go:172] (0xc00058c280) (5) Data frame handling\nI0501 00:01:08.452701 778 log.go:172] (0xc00058c280) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0501 00:01:08.452982 778 log.go:172] (0xc00041c000) Data frame received for 5\nI0501 00:01:08.453019 778 log.go:172] (0xc00058c280) (5) Data frame handling\nI0501 00:01:08.454037 778 log.go:172] (0xc00041c000) Data frame received for 3\nI0501 00:01:08.454073 778 log.go:172] (0xc00013bcc0) (3) Data frame handling\nI0501 00:01:08.455851 778 
log.go:172] (0xc00041c000) Data frame received for 1\nI0501 00:01:08.455881 778 log.go:172] (0xc0002eedc0) (1) Data frame handling\nI0501 00:01:08.455898 778 log.go:172] (0xc0002eedc0) (1) Data frame sent\nI0501 00:01:08.455918 778 log.go:172] (0xc00041c000) (0xc0002eedc0) Stream removed, broadcasting: 1\nI0501 00:01:08.455958 778 log.go:172] (0xc00041c000) Go away received\nI0501 00:01:08.456386 778 log.go:172] (0xc00041c000) (0xc0002eedc0) Stream removed, broadcasting: 1\nI0501 00:01:08.456421 778 log.go:172] (0xc00041c000) (0xc00013bcc0) Stream removed, broadcasting: 3\nI0501 00:01:08.456441 778 log.go:172] (0xc00041c000) (0xc00058c280) Stream removed, broadcasting: 5\n" May 1 00:01:08.461: INFO: stdout: "" May 1 00:01:08.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 execpod-affinity8rtks -- /bin/sh -x -c nc -zv -t -w 2 10.107.254.217 80' May 1 00:01:08.689: INFO: stderr: "I0501 00:01:08.609972 800 log.go:172] (0xc000416210) (0xc0005ee000) Create stream\nI0501 00:01:08.610055 800 log.go:172] (0xc000416210) (0xc0005ee000) Stream added, broadcasting: 1\nI0501 00:01:08.611608 800 log.go:172] (0xc000416210) Reply frame received for 1\nI0501 00:01:08.611655 800 log.go:172] (0xc000416210) (0xc0004cd180) Create stream\nI0501 00:01:08.611665 800 log.go:172] (0xc000416210) (0xc0004cd180) Stream added, broadcasting: 3\nI0501 00:01:08.612296 800 log.go:172] (0xc000416210) Reply frame received for 3\nI0501 00:01:08.612331 800 log.go:172] (0xc000416210) (0xc0004561e0) Create stream\nI0501 00:01:08.612342 800 log.go:172] (0xc000416210) (0xc0004561e0) Stream added, broadcasting: 5\nI0501 00:01:08.612968 800 log.go:172] (0xc000416210) Reply frame received for 5\nI0501 00:01:08.679739 800 log.go:172] (0xc000416210) Data frame received for 5\nI0501 00:01:08.679764 800 log.go:172] (0xc0004561e0) (5) Data frame handling\nI0501 00:01:08.679772 800 log.go:172] (0xc0004561e0) (5) Data frame sent\nI0501 00:01:08.679777 800 log.go:172] (0xc000416210) Data frame received for 5\nI0501 00:01:08.679782 800 log.go:172] (0xc0004561e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.254.217 80\nConnection to 10.107.254.217 80 port [tcp/http] succeeded!\nI0501 00:01:08.679820 800 log.go:172] (0xc000416210) Data frame received for 3\nI0501 00:01:08.679845 800 log.go:172] (0xc0004cd180) (3) Data frame handling\nI0501 00:01:08.680989 800 log.go:172] (0xc000416210) Data frame received for 1\nI0501 00:01:08.681020 800 log.go:172] (0xc0005ee000) (1) Data frame handling\nI0501 00:01:08.681037 800 log.go:172] (0xc0005ee000) (1) Data frame sent\nI0501 00:01:08.681054 800 log.go:172] (0xc000416210) (0xc0005ee000) Stream removed, broadcasting: 1\nI0501 00:01:08.681341 800 log.go:172] (0xc000416210) Go away received\nI0501 00:01:08.681634 800 log.go:172] (0xc000416210) (0xc0005ee000) Stream removed, broadcasting: 1\nI0501 00:01:08.681670 800 log.go:172] (0xc000416210) (0xc0004cd180) Stream removed, broadcasting: 3\nI0501 00:01:08.681683 800 log.go:172] (0xc000416210) (0xc0004561e0) Stream removed, broadcasting: 5\n" May 1 00:01:08.690: INFO: stdout: "" May 1 00:01:08.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 execpod-affinity8rtks -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.254.217:80/ ; done' May 1 00:01:08.990: INFO: stderr: "I0501 00:01:08.822386 820 log.go:172] (0xc000b9b340) 
(0xc000bca460) Create stream\nI0501 00:01:08.822447 820 log.go:172] (0xc000b9b340) (0xc000bca460) Stream added, broadcasting: 1\nI0501 00:01:08.826844 820 log.go:172] (0xc000b9b340) Reply frame received for 1\nI0501 00:01:08.826883 820 log.go:172] (0xc000b9b340) (0xc000854be0) Create stream\nI0501 00:01:08.826893 820 log.go:172] (0xc000b9b340) (0xc000854be0) Stream added, broadcasting: 3\nI0501 00:01:08.827869 820 log.go:172] (0xc000b9b340) Reply frame received for 3\nI0501 00:01:08.827904 820 log.go:172] (0xc000b9b340) (0xc0005c2f00) Create stream\nI0501 00:01:08.827914 820 log.go:172] (0xc000b9b340) (0xc0005c2f00) Stream added, broadcasting: 5\nI0501 00:01:08.828828 820 log.go:172] (0xc000b9b340) Reply frame received for 5\nI0501 00:01:08.897371 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.897476 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.897493 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.897520 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.897537 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.897563 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.902493 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.902526 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.902545 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.903003 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.903050 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.903073 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.903113 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.903139 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.903166 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.911464 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.911508 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.911539 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.912098 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.912130 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.912146 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.912175 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.912214 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.912248 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.918093 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.918111 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.918129 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.918619 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.918643 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.918654 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.918676 820 log.go:172] (0xc000b9b340) Data frame received for 5\n+ echo\n+ curl -q -sI0501 00:01:08.918702 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.918717 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n --connect-timeout 2 
http://10.107.254.217:80/\nI0501 00:01:08.918737 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.918753 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.918775 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.922431 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.922459 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.922487 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.922917 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.922941 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.922950 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.922977 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.923009 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.923038 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.927851 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.927881 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.927914 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.928259 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.928296 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.928322 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.928335 820 log.go:172] (0xc000b9b340) Data frame received for 5\n+ echo\nI0501 00:01:08.928345 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.928418 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.928445 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.928467 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.928498 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.932800 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.932820 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.932836 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.933644 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.933681 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.933697 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.933705 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.933717 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.933727 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.933738 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.933745 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\n+ echo\nI0501 00:01:08.933758 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.933765 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.933778 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.933793 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.937978 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.938001 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.938010 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.938506 820 log.go:172] (0xc000b9b340) Data frame received 
for 5\nI0501 00:01:08.938539 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.938553 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.938579 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.938617 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.938655 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.942826 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.942864 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.942891 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.943317 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.943339 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.943354 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.943371 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.943384 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.943399 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.948885 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.948912 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.948932 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.949632 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.949674 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.949705 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.949730 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.949756 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.949774 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.953719 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.953740 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.953775 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.954105 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.954159 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.954185 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.954215 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.954235 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.954266 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.959380 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.959415 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.959454 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.960058 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.960111 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.960142 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.960172 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.960184 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.960195 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.963313 820 log.go:172] 
(0xc000b9b340) Data frame received for 3\nI0501 00:01:08.963332 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.963362 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.964133 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.964153 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.964171 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.964189 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.964214 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.964246 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.964264 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.964280 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.964313 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.967553 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.967570 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.967593 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.967887 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.967899 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.967908 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.967917 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.967929 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.967951 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.968109 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.968131 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.968149 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.973370 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.973388 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.973403 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.974166 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.974210 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.974228 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.974269 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.974306 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.974343 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.979736 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.979749 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:08.979762 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.979779 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.979790 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.979798 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.979808 820 log.go:172] (0xc0005c2f00) (5) Data frame sent\nI0501 00:01:08.979823 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.979831 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.982393 820 log.go:172] (0xc000b9b340) Data frame received 
for 3\nI0501 00:01:08.982432 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.982465 820 log.go:172] (0xc000854be0) (3) Data frame sent\nI0501 00:01:08.982852 820 log.go:172] (0xc000b9b340) Data frame received for 3\nI0501 00:01:08.982871 820 log.go:172] (0xc000854be0) (3) Data frame handling\nI0501 00:01:08.983149 820 log.go:172] (0xc000b9b340) Data frame received for 5\nI0501 00:01:08.983175 820 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0501 00:01:08.984794 820 log.go:172] (0xc000b9b340) Data frame received for 1\nI0501 00:01:08.984821 820 log.go:172] (0xc000bca460) (1) Data frame handling\nI0501 00:01:08.984832 820 log.go:172] (0xc000bca460) (1) Data frame sent\nI0501 00:01:08.984903 820 log.go:172] (0xc000b9b340) (0xc000bca460) Stream removed, broadcasting: 1\nI0501 00:01:08.985007 820 log.go:172] (0xc000b9b340) Go away received\nI0501 00:01:08.985405 820 log.go:172] (0xc000b9b340) (0xc000bca460) Stream removed, broadcasting: 1\nI0501 00:01:08.985488 820 log.go:172] (0xc000b9b340) (0xc000854be0) Stream removed, broadcasting: 3\nI0501 00:01:08.985555 820 log.go:172] (0xc000b9b340) (0xc0005c2f00) Stream removed, broadcasting: 5\n" May 1 00:01:08.990: INFO: stdout: "\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt\naffinity-clusterip-timeout-bsglt" May 1 00:01:08.990: INFO: Received response from host: May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Received response from host: affinity-clusterip-timeout-bsglt May 1 00:01:08.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 execpod-affinity8rtks -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.254.217:80/' May 1 00:01:09.198: INFO: stderr: "I0501 00:01:09.116017 840 log.go:172] 
(0xc000a5fc30) (0xc0009e81e0) Create stream\nI0501 00:01:09.116077 840 log.go:172] (0xc000a5fc30) (0xc0009e81e0) Stream added, broadcasting: 1\nI0501 00:01:09.121565 840 log.go:172] (0xc000a5fc30) Reply frame received for 1\nI0501 00:01:09.121598 840 log.go:172] (0xc000a5fc30) (0xc000550460) Create stream\nI0501 00:01:09.121608 840 log.go:172] (0xc000a5fc30) (0xc000550460) Stream added, broadcasting: 3\nI0501 00:01:09.122582 840 log.go:172] (0xc000a5fc30) Reply frame received for 3\nI0501 00:01:09.122614 840 log.go:172] (0xc000a5fc30) (0xc00053a140) Create stream\nI0501 00:01:09.122624 840 log.go:172] (0xc000a5fc30) (0xc00053a140) Stream added, broadcasting: 5\nI0501 00:01:09.123458 840 log.go:172] (0xc000a5fc30) Reply frame received for 5\nI0501 00:01:09.184574 840 log.go:172] (0xc000a5fc30) Data frame received for 5\nI0501 00:01:09.184608 840 log.go:172] (0xc00053a140) (5) Data frame handling\nI0501 00:01:09.184629 840 log.go:172] (0xc00053a140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:09.190308 840 log.go:172] (0xc000a5fc30) Data frame received for 3\nI0501 00:01:09.190347 840 log.go:172] (0xc000550460) (3) Data frame handling\nI0501 00:01:09.190378 840 log.go:172] (0xc000550460) (3) Data frame sent\nI0501 00:01:09.190676 840 log.go:172] (0xc000a5fc30) Data frame received for 5\nI0501 00:01:09.190725 840 log.go:172] (0xc00053a140) (5) Data frame handling\nI0501 00:01:09.190758 840 log.go:172] (0xc000a5fc30) Data frame received for 3\nI0501 00:01:09.190780 840 log.go:172] (0xc000550460) (3) Data frame handling\nI0501 00:01:09.192437 840 log.go:172] (0xc000a5fc30) Data frame received for 1\nI0501 00:01:09.192460 840 log.go:172] (0xc0009e81e0) (1) Data frame handling\nI0501 00:01:09.192508 840 log.go:172] (0xc0009e81e0) (1) Data frame sent\nI0501 00:01:09.192528 840 log.go:172] (0xc000a5fc30) (0xc0009e81e0) Stream removed, broadcasting: 1\nI0501 00:01:09.192575 840 log.go:172] (0xc000a5fc30) Go away received\nI0501 00:01:09.193013 840 log.go:172] (0xc000a5fc30) (0xc0009e81e0) Stream removed, broadcasting: 1\nI0501 00:01:09.193038 840 log.go:172] (0xc000a5fc30) (0xc000550460) Stream removed, broadcasting: 3\nI0501 00:01:09.193050 840 log.go:172] (0xc000a5fc30) (0xc00053a140) Stream removed, broadcasting: 5\n" May 1 00:01:09.198: INFO: stdout: "affinity-clusterip-timeout-bsglt" May 1 00:01:24.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 execpod-affinity8rtks -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.254.217:80/' May 1 00:01:24.454: INFO: stderr: "I0501 00:01:24.353863 862 log.go:172] (0xc00003a420) (0xc0004a70e0) Create stream\nI0501 00:01:24.353920 862 log.go:172] (0xc00003a420) (0xc0004a70e0) Stream added, broadcasting: 1\nI0501 00:01:24.356269 862 log.go:172] (0xc00003a420) Reply frame received for 1\nI0501 00:01:24.356326 862 log.go:172] (0xc00003a420) (0xc00038ec80) Create stream\nI0501 00:01:24.356339 862 log.go:172] (0xc00003a420) (0xc00038ec80) Stream added, broadcasting: 3\nI0501 00:01:24.357424 862 log.go:172] (0xc00003a420) Reply frame received for 3\nI0501 00:01:24.357472 862 log.go:172] (0xc00003a420) (0xc00051e6e0) Create stream\nI0501 00:01:24.357492 862 log.go:172] (0xc00003a420) (0xc00051e6e0) Stream added, broadcasting: 5\nI0501 00:01:24.358353 862 log.go:172] (0xc00003a420) Reply frame received for 5\nI0501 00:01:24.440527 862 log.go:172] (0xc00003a420) Data frame received for 5\nI0501 00:01:24.440558 862 
log.go:172] (0xc00051e6e0) (5) Data frame handling\nI0501 00:01:24.440577 862 log.go:172] (0xc00051e6e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:24.446632 862 log.go:172] (0xc00003a420) Data frame received for 3\nI0501 00:01:24.446655 862 log.go:172] (0xc00038ec80) (3) Data frame handling\nI0501 00:01:24.446667 862 log.go:172] (0xc00038ec80) (3) Data frame sent\nI0501 00:01:24.446972 862 log.go:172] (0xc00003a420) Data frame received for 5\nI0501 00:01:24.447005 862 log.go:172] (0xc00051e6e0) (5) Data frame handling\nI0501 00:01:24.447023 862 log.go:172] (0xc00003a420) Data frame received for 3\nI0501 00:01:24.447029 862 log.go:172] (0xc00038ec80) (3) Data frame handling\nI0501 00:01:24.448742 862 log.go:172] (0xc00003a420) Data frame received for 1\nI0501 00:01:24.448759 862 log.go:172] (0xc0004a70e0) (1) Data frame handling\nI0501 00:01:24.448769 862 log.go:172] (0xc0004a70e0) (1) Data frame sent\nI0501 00:01:24.448836 862 log.go:172] (0xc00003a420) (0xc0004a70e0) Stream removed, broadcasting: 1\nI0501 00:01:24.448981 862 log.go:172] (0xc00003a420) Go away received\nI0501 00:01:24.449292 862 log.go:172] (0xc00003a420) (0xc0004a70e0) Stream removed, broadcasting: 1\nI0501 00:01:24.449312 862 log.go:172] (0xc00003a420) (0xc00038ec80) Stream removed, broadcasting: 3\nI0501 00:01:24.449324 862 log.go:172] (0xc00003a420) (0xc00051e6e0) Stream removed, broadcasting: 5\n" May 1 00:01:24.454: INFO: stdout: "affinity-clusterip-timeout-bsglt" May 1 00:01:39.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1075 execpod-affinity8rtks -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.254.217:80/' May 1 00:01:39.715: INFO: stderr: "I0501 00:01:39.612197 885 log.go:172] (0xc0000e8c60) (0xc0004041e0) Create stream\nI0501 00:01:39.612275 885 log.go:172] (0xc0000e8c60) (0xc0004041e0) Stream added, broadcasting: 1\nI0501 00:01:39.615354 885 log.go:172] (0xc0000e8c60) Reply frame received for 1\nI0501 00:01:39.615451 885 log.go:172] (0xc0000e8c60) (0xc00071ee60) Create stream\nI0501 00:01:39.615467 885 log.go:172] (0xc0000e8c60) (0xc00071ee60) Stream added, broadcasting: 3\nI0501 00:01:39.616670 885 log.go:172] (0xc0000e8c60) Reply frame received for 3\nI0501 00:01:39.616710 885 log.go:172] (0xc0000e8c60) (0xc0007125a0) Create stream\nI0501 00:01:39.616726 885 log.go:172] (0xc0000e8c60) (0xc0007125a0) Stream added, broadcasting: 5\nI0501 00:01:39.618038 885 log.go:172] (0xc0000e8c60) Reply frame received for 5\nI0501 00:01:39.705688 885 log.go:172] (0xc0000e8c60) Data frame received for 5\nI0501 00:01:39.705718 885 log.go:172] (0xc0007125a0) (5) Data frame handling\nI0501 00:01:39.705736 885 log.go:172] (0xc0007125a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.254.217:80/\nI0501 00:01:39.707218 885 log.go:172] (0xc0000e8c60) Data frame received for 3\nI0501 00:01:39.707247 885 log.go:172] (0xc00071ee60) (3) Data frame handling\nI0501 00:01:39.707268 885 log.go:172] (0xc00071ee60) (3) Data frame sent\nI0501 00:01:39.707653 885 log.go:172] (0xc0000e8c60) Data frame received for 5\nI0501 00:01:39.707688 885 log.go:172] (0xc0007125a0) (5) Data frame handling\nI0501 00:01:39.707715 885 log.go:172] (0xc0000e8c60) Data frame received for 3\nI0501 00:01:39.707732 885 log.go:172] (0xc00071ee60) (3) Data frame handling\nI0501 00:01:39.709663 885 log.go:172] (0xc0000e8c60) Data frame received for 1\nI0501 00:01:39.709698 885 log.go:172] 
(0xc0004041e0) (1) Data frame handling\nI0501 00:01:39.709723 885 log.go:172] (0xc0004041e0) (1) Data frame sent\nI0501 00:01:39.709755 885 log.go:172] (0xc0000e8c60) (0xc0004041e0) Stream removed, broadcasting: 1\nI0501 00:01:39.709791 885 log.go:172] (0xc0000e8c60) Go away received\nI0501 00:01:39.710165 885 log.go:172] (0xc0000e8c60) (0xc0004041e0) Stream removed, broadcasting: 1\nI0501 00:01:39.710189 885 log.go:172] (0xc0000e8c60) (0xc00071ee60) Stream removed, broadcasting: 3\nI0501 00:01:39.710207 885 log.go:172] (0xc0000e8c60) (0xc0007125a0) Stream removed, broadcasting: 5\n" May 1 00:01:39.715: INFO: stdout: "affinity-clusterip-timeout-8l9b7" May 1 00:01:39.715: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1075, will wait for the garbage collector to delete the pods May 1 00:01:39.851: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.853415ms May 1 00:01:40.251: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.247848ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:01:55.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1075" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:66.628 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":290,"completed":52,"skipped":939,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:01:55.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:01:55.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3071" for this suite. 
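Looking back at the services-1075 session-affinity spec: the service under test pins clients to one endpoint with `sessionAffinity: ClientIP` plus a short `timeoutSeconds`, which matches what the log shows. Sixteen back-to-back curls and the follow-ups at 00:01:09 and 00:01:24 all return affinity-clusterip-timeout-bsglt, while the curl after the final idle gap lands on affinity-clusterip-timeout-8l9b7, showing the affinity entry eventually ages out. A sketch of such a service; the selector, target port, and timeout value are assumptions, since the manifest itself is not logged:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-clusterip-timeout
  spec:
    selector:
      name: affinity-clusterip-timeout   # assumed; must match the RC's pod labels
    ports:
    - port: 80
      targetPort: 9376
    sessionAffinity: ClientIP
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10               # assumed; idle longer than this and a new backend may be chosen
  EOF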
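The Pods-Extended QOS spec just above is simpler: it only needs a pod whose requests equal its limits, after which the API server sets `status.qosClass` to Guaranteed. A minimal sketch with illustrative values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-guaranteed
  spec:
    containers:
    - name: app
      image: k8s.gcr.io/pause:3.2
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:          # equal to requests for every resource => Guaranteed
          cpu: 100m
          memory: 100Mi
  EOF
  kubectl get pod qos-guaranteed -o jsonpath='{.status.qosClass}'   # Guaranteed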
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":290,"completed":53,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:01:55.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:01:56.248: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:01:58.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888116, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888116, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888116, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888116, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:02:01.532: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:02:04.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3915" for this suite. STEP: Destroying namespace "webhook-3915-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.632 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":290,"completed":54,"skipped":980,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:02:04.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-80f5c8d9-7be9-4e02-9340-1512537cd377 STEP: Creating a pod to test consume secrets May 1 00:02:04.568: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95" in namespace "projected-6543" to be "Succeeded or Failed" May 1 00:02:04.593: INFO: Pod "pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95": Phase="Pending", Reason="", readiness=false. Elapsed: 24.946948ms May 1 00:02:06.597: INFO: Pod "pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029424061s May 1 00:02:08.602: INFO: Pod "pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034213028s STEP: Saw pod success May 1 00:02:08.602: INFO: Pod "pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95" satisfied condition "Succeeded or Failed" May 1 00:02:08.606: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95 container projected-secret-volume-test: STEP: delete the pod May 1 00:02:08.647: INFO: Waiting for pod pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95 to disappear May 1 00:02:08.660: INFO: Pod pod-projected-secrets-bea921cb-c9f7-4a6f-836d-5b1cf2fbcb95 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:02:08.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6543" for this suite. 
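
The projected-secret mount above can be reproduced with a hand-written manifest; the items list remaps a secret key to a new file name and sets a per-file mode, which is what the spec asserts. All names and values here are illustrative:

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: projected-demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # key remapped to a new file name ("mappings")
            mode: 0400              # per-item mode ("Item Mode set")
EOF
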
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":55,"skipped":996,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:02:08.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9774.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9774.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:02:14.807: INFO: DNS probes using dns-test-e16631cd-9e46-494e-b589-4297cd1f97e6 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9774.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9774.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:02:22.942: INFO: File wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:22.946: INFO: File jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:22.946: INFO: Lookups using dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 failed for: [wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local] May 1 00:02:27.959: INFO: File wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:27.962: INFO: File jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 1 00:02:27.963: INFO: Lookups using dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 failed for: [wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local] May 1 00:02:32.950: INFO: File wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:32.955: INFO: File jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:32.955: INFO: Lookups using dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 failed for: [wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local] May 1 00:02:37.951: INFO: File wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:37.955: INFO: File jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:37.955: INFO: Lookups using dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 failed for: [wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local] May 1 00:02:42.951: INFO: File wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:42.956: INFO: File jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local from pod dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 00:02:42.956: INFO: Lookups using dns-9774/dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 failed for: [wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local] May 1 00:02:47.957: INFO: DNS probes using dns-test-97842a83-aaf3-4180-9577-aa8baafb1e99 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9774.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9774.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9774.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9774.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:02:56.704: INFO: DNS probes using dns-test-78731692-12bc-4d4b-810e-4e092833cb4a succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:02:56.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9774" for this suite. 
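
An ExternalName service is published by the cluster DNS as a plain CNAME, so the probe loops above boil down to watching one record change. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: default
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# from any pod that has dig:
dig +short dns-test-service.default.svc.cluster.local CNAME    # foo.example.com.
kubectl patch service dns-test-service -p '{"spec":{"externalName":"bar.example.com"}}'
dig +short dns-test-service.default.svc.cluster.local CNAME    # bar.example.com., eventually

The retry loop logged above exists precisely because the probe pods may keep seeing the old answer for a while after the patch.
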
• [SLOW TEST:48.148 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":290,"completed":56,"skipped":999,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:02:56.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:02:57.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966" in namespace "downward-api-8642" to be "Succeeded or Failed" May 1 00:02:57.237: INFO: Pod "downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966": Phase="Pending", Reason="", readiness=false. Elapsed: 24.501908ms May 1 00:02:59.241: INFO: Pod "downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028665719s May 1 00:03:01.245: INFO: Pod "downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966": Phase="Running", Reason="", readiness=true. Elapsed: 4.032829876s May 1 00:03:03.249: INFO: Pod "downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037111613s STEP: Saw pod success May 1 00:03:03.250: INFO: Pod "downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966" satisfied condition "Succeeded or Failed" May 1 00:03:03.252: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966 container client-container: STEP: delete the pod May 1 00:03:03.302: INFO: Waiting for pod downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966 to disappear May 1 00:03:03.315: INFO: Pod downwardapi-volume-a40ad699-a70b-4f02-b94c-976a6f47b966 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:03:03.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8642" for this suite. 
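
DefaultMode on a downward API volume applies one permission value to every projected file unless an item overrides it. A minimal sketch (names, label, and the 0400 value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode-demo
  labels:
    zone: us-east-1a            # surfaced into the volume below
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400         # the mode asserted on the created files
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
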
• [SLOW TEST:6.502 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":57,"skipped":1017,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:03:03.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:03:03.438: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Pending, waiting for it to be Running (with Ready = true) May 1 00:03:05.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Pending, waiting for it to be Running (with Ready = true) May 1 00:03:07.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:09.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:11.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:13.443: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:15.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:17.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:19.441: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:21.443: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:23.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:25.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:27.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:29.442: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = false) May 1 00:03:31.443: INFO: The status of Pod test-webserver-da5dfc47-d910-4724-b6c6-4ae592eaa433 is Running (Ready = true) May 1 00:03:31.446: INFO: Container started at 2020-05-01 00:03:05 +0000 UTC, pod 
became ready at 2020-05-01 00:03:29 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:03:31.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7246" for this suite. • [SLOW TEST:28.131 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":290,"completed":58,"skipped":1029,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:03:31.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-cc1d01c4-ba63-4f60-a638-51a9ff0c850e STEP: Creating a pod to test consume secrets May 1 00:03:31.572: INFO: Waiting up to 5m0s for pod "pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e" in namespace "secrets-4981" to be "Succeeded or Failed" May 1 00:03:31.599: INFO: Pod "pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.662974ms May 1 00:03:33.602: INFO: Pod "pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030058468s May 1 00:03:35.607: INFO: Pod "pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034751176s STEP: Saw pod success May 1 00:03:35.607: INFO: Pod "pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e" satisfied condition "Succeeded or Failed" May 1 00:03:35.611: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e container secret-volume-test: STEP: delete the pod May 1 00:03:35.783: INFO: Waiting for pod pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e to disappear May 1 00:03:35.795: INFO: Pod pod-secrets-9949b1d1-55ab-43bd-a685-b5f08273ca2e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:03:35.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4981" for this suite. 
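
The file-mode assertion in the secret test can be checked by hand with stat; -L dereferences the symlink the kubelet uses for atomic updates. Pod, mount path, and file name below are illustrative, assuming an items mapping of key data-1 to path new-path-data-1 with mode 0400:

kubectl exec secret-mode-demo -- stat -L -c '%a' /etc/secret-volume/new-path-data-1   # expect: 400
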
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":59,"skipped":1089,"failed":0} ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:03:35.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:03:35.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8" in namespace "downward-api-3688" to be "Succeeded or Failed" May 1 00:03:35.990: INFO: Pod "downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.995528ms May 1 00:03:38.000: INFO: Pod "downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042857522s May 1 00:03:40.085: INFO: Pod "downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8": Phase="Running", Reason="", readiness=true. Elapsed: 4.128172425s May 1 00:03:42.089: INFO: Pod "downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132148527s STEP: Saw pod success May 1 00:03:42.089: INFO: Pod "downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8" satisfied condition "Succeeded or Failed" May 1 00:03:42.092: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8 container client-container: STEP: delete the pod May 1 00:03:42.174: INFO: Waiting for pod downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8 to disappear May 1 00:03:42.191: INFO: Pod downwardapi-volume-b615a6f5-0807-4a74-8a07-620d52856ef8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:03:42.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3688" for this suite. 
• [SLOW TEST:6.396 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":60,"skipped":1089,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:03:42.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2705 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2705 to expose endpoints map[] May 1 00:03:42.420: INFO: Get endpoints failed (10.352428ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 1 00:03:43.423: INFO: successfully validated that service endpoint-test2 in namespace services-2705 exposes endpoints map[] (1.013335308s elapsed) STEP: Creating pod pod1 in namespace services-2705 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2705 to expose endpoints map[pod1:[80]] May 1 00:03:47.868: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.438718423s elapsed, will retry) May 1 00:03:51.091: INFO: successfully validated that service endpoint-test2 in namespace services-2705 exposes endpoints map[pod1:[80]] (7.661950519s elapsed) STEP: Creating pod pod2 in namespace services-2705 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2705 to expose endpoints map[pod1:[80] pod2:[80]] May 1 00:03:55.491: INFO: Unexpected endpoints: found map[b0027469-4e22-44ed-97b7-b6548150202c:[80]], expected map[pod1:[80] pod2:[80]] (4.395364557s elapsed, will retry) May 1 00:03:58.898: INFO: successfully validated that service endpoint-test2 in namespace services-2705 exposes endpoints map[pod1:[80] pod2:[80]] (7.802797409s elapsed) STEP: Deleting pod pod1 in namespace services-2705 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2705 to expose endpoints map[pod2:[80]] May 1 00:03:59.998: INFO: successfully validated that service endpoint-test2 in namespace services-2705 exposes endpoints map[pod2:[80]] (1.094989329s elapsed) STEP: Deleting pod pod2 in namespace services-2705 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2705 to expose endpoints map[] May 1 00:04:02.180: INFO: successfully validated that service endpoint-test2 in namespace services-2705 exposes endpoints map[] (2.176980044s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:04:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2705" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:22.701 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":290,"completed":61,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:04:04.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 00:04:16.475: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 00:04:16.492: INFO: Pod pod-with-prestop-exec-hook still exists May 1 00:04:18.492: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 00:04:18.505: INFO: Pod pod-with-prestop-exec-hook still exists May 1 00:04:20.492: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 00:04:20.506: INFO: Pod pod-with-prestop-exec-hook still exists May 1 00:04:22.492: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 00:04:22.496: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:04:22.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8759" for this suite. 
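
A preStop exec hook runs in the container before the kubelet sends SIGTERM during deletion, which is why the pod above lingers for several seconds after the delete is issued. A generic sketch (the suite's real hook curls its handler pod; this one just sleeps):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo goodbye; sleep 5"]
EOF
kubectl delete pod prestop-demo    # deletion waits on the hook, then SIGTERM
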
• [SLOW TEST:17.609 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":290,"completed":62,"skipped":1149,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:04:22.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:04:22.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70" in namespace "projected-4089" to be "Succeeded or Failed" May 1 00:04:22.586: INFO: Pod "downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70": Phase="Pending", Reason="", readiness=false. Elapsed: 15.018998ms May 1 00:04:24.839: INFO: Pod "downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268310478s May 1 00:04:26.843: INFO: Pod "downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70": Phase="Running", Reason="", readiness=true. Elapsed: 4.271840773s May 1 00:04:28.974: INFO: Pod "downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.403540832s STEP: Saw pod success May 1 00:04:28.974: INFO: Pod "downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70" satisfied condition "Succeeded or Failed" May 1 00:04:28.977: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70 container client-container: STEP: delete the pod May 1 00:04:29.094: INFO: Waiting for pod downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70 to disappear May 1 00:04:29.102: INFO: Pod downwardapi-volume-1c03f80c-20d8-4825-9d3f-4f332a016b70 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:04:29.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4089" for this suite. 
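
When a container declares no cpu limit, a downward-API resourceFieldRef for limits.cpu falls back to the node's allocatable cpu, which is what this spec checks. An illustrative sketch using a projected volume, as this test family does:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: allocatable-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu     # no limit declared => node allocatable cpu
EOF
kubectl logs allocatable-cpu-demo      # the fallback value, in whole cores by default
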
• [SLOW TEST:6.600 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":63,"skipped":1149,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:04:29.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 1 00:04:29.297: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:04:40.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8715" for this suite. 
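
With restartPolicy Never, a failing init container is not retried: the pod goes to phase Failed and the app container never starts. An illustrative sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]      # exits 1, blocking the app container
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # eventually: Failed
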
• [SLOW TEST:11.525 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":290,"completed":64,"skipped":1157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:04:40.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1 May 1 00:04:41.677: INFO: Pod name my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1: Found 0 pods out of 1 May 1 00:04:46.681: INFO: Pod name my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1: Found 1 pods out of 1 May 1 00:04:46.681: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1" are running May 1 00:04:46.684: INFO: Pod "my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1-422p7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 00:04:41 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 00:04:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 00:04:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 00:04:41 +0000 UTC Reason: Message:}]) May 1 00:04:46.684: INFO: Trying to dial the pod May 1 00:04:51.704: INFO: Controller my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1: Got expected result from replica 1 [my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1-422p7]: "my-hostname-basic-109c000c-453d-4f7b-8910-df5075346af1-422p7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:04:51.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3802" for this suite. 
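
The RC above runs one replica of an image that serves its own hostname over HTTP; the suite then dials each replica and expects the pod name back. A sketch; the agnhost image tag and port are illustrative (serve-hostname conventionally listens on 9376):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # illustrative tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
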
• [SLOW TEST:11.072 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":290,"completed":65,"skipped":1195,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:04:51.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:04:51.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4" in namespace "projected-5277" to be "Succeeded or Failed" May 1 00:04:51.791: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.980365ms May 1 00:04:53.795: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02258807s May 1 00:04:55.823: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051189795s May 1 00:04:58.107: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334825382s May 1 00:05:00.111: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33913041s May 1 00:05:02.359: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.587044437s May 1 00:05:04.363: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.59073443s May 1 00:05:06.667: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.895188216s May 1 00:05:09.012: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.239682936s May 1 00:05:11.016: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.243575576s May 1 00:05:13.260: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Running", Reason="", readiness=true. 
Elapsed: 21.488169879s May 1 00:05:15.494: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.722132938s STEP: Saw pod success May 1 00:05:15.494: INFO: Pod "downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4" satisfied condition "Succeeded or Failed" May 1 00:05:15.498: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4 container client-container: STEP: delete the pod May 1 00:05:16.381: INFO: Waiting for pod downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4 to disappear May 1 00:05:16.422: INFO: Pod downwardapi-volume-5869cc65-ec3e-484e-b050-3a6628cc5fb4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:05:16.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5277" for this suite. • [SLOW TEST:24.717 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":290,"completed":66,"skipped":1199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:05:16.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 1 00:05:16.556: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:05:30.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7483" for this suite. 
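
On a RestartAlways pod, init containers still run once each, in order, to completion before the app containers start; only the app containers are restarted afterwards. An illustrative sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo
spec:
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-order-demo \
  -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'   # Completed Completed
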
• [SLOW TEST:14.173 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":290,"completed":67,"skipped":1223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:05:30.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 1 00:05:30.705: INFO: Waiting up to 5m0s for pod "pod-28b41006-5fbc-4c8b-a732-930a2cb26b67" in namespace "emptydir-9914" to be "Succeeded or Failed" May 1 00:05:30.707: INFO: Pod "pod-28b41006-5fbc-4c8b-a732-930a2cb26b67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325732ms May 1 00:05:32.711: INFO: Pod "pod-28b41006-5fbc-4c8b-a732-930a2cb26b67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006827995s May 1 00:05:34.715: INFO: Pod "pod-28b41006-5fbc-4c8b-a732-930a2cb26b67": Phase="Running", Reason="", readiness=true. Elapsed: 4.010814587s May 1 00:05:36.841: INFO: Pod "pod-28b41006-5fbc-4c8b-a732-930a2cb26b67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136878885s STEP: Saw pod success May 1 00:05:36.841: INFO: Pod "pod-28b41006-5fbc-4c8b-a732-930a2cb26b67" satisfied condition "Succeeded or Failed" May 1 00:05:36.844: INFO: Trying to get logs from node latest-worker pod pod-28b41006-5fbc-4c8b-a732-930a2cb26b67 container test-container: STEP: delete the pod May 1 00:05:36.998: INFO: Waiting for pod pod-28b41006-5fbc-4c8b-a732-930a2cb26b67 to disappear May 1 00:05:37.014: INFO: Pod pod-28b41006-5fbc-4c8b-a732-930a2cb26b67 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:05:37.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9914" for this suite. 
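
An emptyDir on the default medium is a directory on the node's disk, and the "correct mode" assertion is about the permission bits the kubelet gives that directory (0777 per my reading of the suite's mounttest output, possibly with the sticky bit depending on version). A sketch to inspect it by hand:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium; medium: Memory would use tmpfs instead
EOF
kubectl logs emptydir-mode-demo   # permission bits of the volume root
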
• [SLOW TEST:6.419 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":68,"skipped":1265,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:05:37.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7780 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 00:05:37.131: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 1 00:05:37.183: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 00:05:39.368: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 00:05:41.195: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 00:05:43.188: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:45.188: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:47.187: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:49.187: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:51.187: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:53.186: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:55.187: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:57.204: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:05:59.186: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:06:01.187: INFO: The status of Pod netserver-0 is Running (Ready = true) May 1 00:06:01.192: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 1 00:06:07.218: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.25:8080/dial?request=hostname&protocol=udp&host=10.244.1.74&port=8081&tries=1'] Namespace:pod-network-test-7780 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:06:07.218: INFO: >>> kubeConfig: /root/.kube/config I0501 00:06:07.275291 7 log.go:172] (0xc002e0f130) (0xc001f86140) Create stream I0501 00:06:07.275335 7 log.go:172] (0xc002e0f130) (0xc001f86140) Stream added, broadcasting: 1 I0501 00:06:07.277719 7 log.go:172] (0xc002e0f130) Reply 
frame received for 1 I0501 00:06:07.277781 7 log.go:172] (0xc002e0f130) (0xc002e02000) Create stream I0501 00:06:07.277799 7 log.go:172] (0xc002e0f130) (0xc002e02000) Stream added, broadcasting: 3 I0501 00:06:07.278795 7 log.go:172] (0xc002e0f130) Reply frame received for 3 I0501 00:06:07.278823 7 log.go:172] (0xc002e0f130) (0xc0023fec80) Create stream I0501 00:06:07.278836 7 log.go:172] (0xc002e0f130) (0xc0023fec80) Stream added, broadcasting: 5 I0501 00:06:07.279666 7 log.go:172] (0xc002e0f130) Reply frame received for 5 I0501 00:06:07.350414 7 log.go:172] (0xc002e0f130) Data frame received for 3 I0501 00:06:07.350435 7 log.go:172] (0xc002e02000) (3) Data frame handling I0501 00:06:07.350442 7 log.go:172] (0xc002e02000) (3) Data frame sent I0501 00:06:07.351458 7 log.go:172] (0xc002e0f130) Data frame received for 5 I0501 00:06:07.351500 7 log.go:172] (0xc0023fec80) (5) Data frame handling I0501 00:06:07.351527 7 log.go:172] (0xc002e0f130) Data frame received for 3 I0501 00:06:07.351547 7 log.go:172] (0xc002e02000) (3) Data frame handling I0501 00:06:07.353688 7 log.go:172] (0xc002e0f130) Data frame received for 1 I0501 00:06:07.353706 7 log.go:172] (0xc001f86140) (1) Data frame handling I0501 00:06:07.353717 7 log.go:172] (0xc001f86140) (1) Data frame sent I0501 00:06:07.353728 7 log.go:172] (0xc002e0f130) (0xc001f86140) Stream removed, broadcasting: 1 I0501 00:06:07.353741 7 log.go:172] (0xc002e0f130) Go away received I0501 00:06:07.354079 7 log.go:172] (0xc002e0f130) (0xc001f86140) Stream removed, broadcasting: 1 I0501 00:06:07.354098 7 log.go:172] (0xc002e0f130) (0xc002e02000) Stream removed, broadcasting: 3 I0501 00:06:07.354106 7 log.go:172] (0xc002e0f130) (0xc0023fec80) Stream removed, broadcasting: 5 May 1 00:06:07.354: INFO: Waiting for responses: map[] May 1 00:06:07.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.25:8080/dial?request=hostname&protocol=udp&host=10.244.2.24&port=8081&tries=1'] Namespace:pod-network-test-7780 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:06:07.357: INFO: >>> kubeConfig: /root/.kube/config I0501 00:06:07.385705 7 log.go:172] (0xc002f22580) (0xc0023ff040) Create stream I0501 00:06:07.385734 7 log.go:172] (0xc002f22580) (0xc0023ff040) Stream added, broadcasting: 1 I0501 00:06:07.387646 7 log.go:172] (0xc002f22580) Reply frame received for 1 I0501 00:06:07.387679 7 log.go:172] (0xc002f22580) (0xc001f86780) Create stream I0501 00:06:07.387693 7 log.go:172] (0xc002f22580) (0xc001f86780) Stream added, broadcasting: 3 I0501 00:06:07.388708 7 log.go:172] (0xc002f22580) Reply frame received for 3 I0501 00:06:07.388738 7 log.go:172] (0xc002f22580) (0xc00151e000) Create stream I0501 00:06:07.388748 7 log.go:172] (0xc002f22580) (0xc00151e000) Stream added, broadcasting: 5 I0501 00:06:07.389842 7 log.go:172] (0xc002f22580) Reply frame received for 5 I0501 00:06:07.496676 7 log.go:172] (0xc002f22580) Data frame received for 3 I0501 00:06:07.496702 7 log.go:172] (0xc001f86780) (3) Data frame handling I0501 00:06:07.496725 7 log.go:172] (0xc001f86780) (3) Data frame sent I0501 00:06:07.497336 7 log.go:172] (0xc002f22580) Data frame received for 5 I0501 00:06:07.497377 7 log.go:172] (0xc00151e000) (5) Data frame handling I0501 00:06:07.497410 7 log.go:172] (0xc002f22580) Data frame received for 3 I0501 00:06:07.497424 7 log.go:172] (0xc001f86780) (3) Data frame handling I0501 00:06:07.499192 7 log.go:172] (0xc002f22580) Data frame received for 1 I0501 
00:06:07.499211 7 log.go:172] (0xc0023ff040) (1) Data frame handling I0501 00:06:07.499224 7 log.go:172] (0xc0023ff040) (1) Data frame sent I0501 00:06:07.499509 7 log.go:172] (0xc002f22580) (0xc0023ff040) Stream removed, broadcasting: 1 I0501 00:06:07.499530 7 log.go:172] (0xc002f22580) Go away received I0501 00:06:07.499632 7 log.go:172] (0xc002f22580) (0xc0023ff040) Stream removed, broadcasting: 1 I0501 00:06:07.499660 7 log.go:172] (0xc002f22580) (0xc001f86780) Stream removed, broadcasting: 3 I0501 00:06:07.499697 7 log.go:172] (0xc002f22580) (0xc00151e000) Stream removed, broadcasting: 5 May 1 00:06:07.499: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:06:07.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7780" for this suite. • [SLOW TEST:30.486 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":290,"completed":69,"skipped":1280,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:06:07.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-7e0bf85c-668e-43a5-a0b8-e9b6a26e5f55 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:06:07.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5648" for this suite. 
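For reference, the empty-key rejection exercised by the ConfigMap test above is plain API-server validation and can be reproduced against any cluster. A minimal sketch (the ConfigMap name is illustrative, not taken from this run); the API server is expected to refuse the object because "" is not a valid data key:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-empty-key-demo    # illustrative name
data:
  "": "value"                # empty key: expected to fail validation
EOF
# expected: an Invalid error roughly of the form
#   data[]: Invalid value: "": a valid config key must consist of alphanumeric characters, '-', '_' or '.'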
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":290,"completed":70,"skipped":1288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:06:07.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:06:07.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9181" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":290,"completed":71,"skipped":1318,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:06:07.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 00:06:07.826: INFO: Waiting up to 5m0s for pod "pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70" in namespace "emptydir-7538" to be "Succeeded or Failed" May 1 00:06:07.837: INFO: Pod "pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70": Phase="Pending", Reason="", readiness=false. Elapsed: 11.82374ms May 1 00:06:09.841: INFO: Pod "pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015719425s May 1 00:06:11.846: INFO: Pod "pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020010068s May 1 00:06:13.956: INFO: Pod "pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.130597419s STEP: Saw pod success May 1 00:06:13.956: INFO: Pod "pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70" satisfied condition "Succeeded or Failed" May 1 00:06:13.959: INFO: Trying to get logs from node latest-worker pod pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70 container test-container: STEP: delete the pod May 1 00:06:14.005: INFO: Waiting for pod pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70 to disappear May 1 00:06:14.159: INFO: Pod pod-b6eb4828-8b9a-40c2-a7f5-2bc6bd544c70 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:06:14.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7538" for this suite. • [SLOW TEST:6.512 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":72,"skipped":1340,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:06:14.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 1 00:06:14.838: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 1 00:06:26.486: INFO: >>> kubeConfig: /root/.kube/config May 1 00:06:29.412: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:06:40.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4721" for this suite. 
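For context on what "CRs in the same group but different versions (one multiversion CRD)" means concretely, here is a hedged sketch of a single CRD serving two versions (group, kind, and schema are illustrative, not the ones generated by this run). Once established, both versions should be discoverable in the aggregated OpenAPI document:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # illustrative group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                 # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object}
EOF
# both versions should then appear in the published OpenAPI spec:
kubectl get --raw /openapi/v2 | grep -o 'com.example.v[12].Foo' | sort -u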
• [SLOW TEST:25.941 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":290,"completed":73,"skipped":1340,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:06:40.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 1 00:06:40.214: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 00:06:40.223: INFO: Waiting for terminating namespaces to be deleted... May 1 00:06:40.225: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 1 00:06:40.229: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 1 00:06:40.229: INFO: Container kindnet-cni ready: true, restart count 0 May 1 00:06:40.229: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 1 00:06:40.229: INFO: Container kube-proxy ready: true, restart count 0 May 1 00:06:40.229: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 1 00:06:40.232: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 1 00:06:40.232: INFO: Container kindnet-cni ready: true, restart count 0 May 1 00:06:40.232: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 1 00:06:40.233: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-64db9be0-2b34-4a45-b8e5-98cbe519cd9f 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-64db9be0-2b34-4a45-b8e5-98cbe519cd9f off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-64db9be0-2b34-4a45-b8e5-98cbe519cd9f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:20.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-903" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:40.374 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":290,"completed":74,"skipped":1357,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:20.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-1fbd09cd-e78e-49ac-a870-4b86296c0d9a STEP: Creating a pod to test consume secrets May 1 00:07:20.619: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f" in namespace "projected-3251" to be "Succeeded or Failed" May 1 00:07:20.640: INFO: Pod "pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.163713ms May 1 00:07:22.645: INFO: Pod "pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025596358s May 1 00:07:24.648: INFO: Pod "pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f": Phase="Running", Reason="", readiness=true. Elapsed: 4.029515836s May 1 00:07:26.669: INFO: Pod "pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.049803984s STEP: Saw pod success May 1 00:07:26.669: INFO: Pod "pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f" satisfied condition "Succeeded or Failed" May 1 00:07:26.671: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f container projected-secret-volume-test: STEP: delete the pod May 1 00:07:27.034: INFO: Waiting for pod pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f to disappear May 1 00:07:27.037: INFO: Pod pod-projected-secrets-cd4eb6cf-b25b-45c8-81bf-84b2ca36fb3f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:27.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3251" for this suite. • [SLOW TEST:6.540 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":75,"skipped":1366,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:27.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-b62e09d4-2929-4275-846a-e7d795404ddf STEP: Creating a pod to test consume configMaps May 1 00:07:27.682: INFO: Waiting up to 5m0s for pod "pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df" in namespace "configmap-7991" to be "Succeeded or Failed" May 1 00:07:27.777: INFO: Pod "pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df": Phase="Pending", Reason="", readiness=false. Elapsed: 94.661616ms May 1 00:07:30.455: INFO: Pod "pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772747264s May 1 00:07:32.458: INFO: Pod "pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df": Phase="Running", Reason="", readiness=true. Elapsed: 4.775692658s May 1 00:07:34.462: INFO: Pod "pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.779482916s STEP: Saw pod success May 1 00:07:34.462: INFO: Pod "pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df" satisfied condition "Succeeded or Failed" May 1 00:07:34.464: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df container configmap-volume-test: STEP: delete the pod May 1 00:07:34.513: INFO: Waiting for pod pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df to disappear May 1 00:07:34.548: INFO: Pod pod-configmaps-8222bb0b-9205-47a2-98ee-046d196b08df no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:34.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7991" for this suite. • [SLOW TEST:7.511 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":290,"completed":76,"skipped":1379,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:34.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 1 00:07:42.496: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:43.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8363" for this suite. 
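The FallbackToLogsOnError behavior just asserted ("Expected: &{} to match Container's Termination Message") can be poked at with a sketch like this (pod and container names are illustrative): when the container exits zero and writes nothing to the termination message path, the message stays empty; logs are only copied into it on error:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod has Succeeded, the termination message should be empty:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'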
• [SLOW TEST:8.645 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":290,"completed":77,"skipped":1391,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:43.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:48.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8322" for this suite. 
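The hostAliases test above checks that kubelet-managed /etc/hosts entries appear inside the container; a minimal sketch (IP and hostnames are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
# the logs should contain a line like "123.45.67.89  foo.local  bar.local":
kubectl logs hostaliases-demo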
• [SLOW TEST:5.726 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":78,"skipped":1397,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:48.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:07:49.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 1 00:07:49.229: INFO: stderr: "" May 1 00:07:49.229: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.2.232+a26c34e47007df\", GitCommit:\"a26c34e47007dfa26378a7fac5296763df476d11\", GitTreeState:\"clean\", BuildDate:\"2020-04-29T19:21:59Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:49.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4194" for this suite. 
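The version check above parses the human-readable `kubectl version` output captured in stdout; the same client/server pair is also available in structured form, which is usually easier to assert on:

kubectl version -o json      # emits clientVersion and serverVersion as JSON objects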
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":290,"completed":79,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:49.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 00:07:49.333: INFO: Waiting up to 5m0s for pod "pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb" in namespace "emptydir-3118" to be "Succeeded or Failed" May 1 00:07:49.345: INFO: Pod "pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.475004ms May 1 00:07:52.374: INFO: Pod "pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040857621s May 1 00:07:54.428: INFO: Pod "pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.094569898s May 1 00:07:56.431: INFO: Pod "pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.097714998s STEP: Saw pod success May 1 00:07:56.431: INFO: Pod "pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb" satisfied condition "Succeeded or Failed" May 1 00:07:56.433: INFO: Trying to get logs from node latest-worker2 pod pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb container test-container: STEP: delete the pod May 1 00:07:57.781: INFO: Waiting for pod pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb to disappear May 1 00:07:58.263: INFO: Pod pod-14374472-cb97-4f65-b7ec-15bbda9c1bcb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:07:58.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3118" for this suite. 
• [SLOW TEST:9.308 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":80,"skipped":1430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:07:58.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 1 00:08:00.023: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 1 00:08:02.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888479, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:08:04.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888479, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:08:06.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888479, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:08:08.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888479, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:08:10.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888479, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:08:12.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888480, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888479, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:08:15.124: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:08:15.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:08:16.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4806" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:17.916 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":290,"completed":81,"skipped":1466,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:08:16.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-kmbd STEP: Creating a pod to test atomic-volume-subpath May 1 00:08:16.545: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kmbd" in namespace "subpath-8642" to be "Succeeded or Failed" May 1 00:08:16.574: INFO: Pod 
"pod-subpath-test-projected-kmbd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.999281ms May 1 00:08:18.614: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068986806s May 1 00:08:21.353: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.807397871s May 1 00:08:23.400: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.85429291s May 1 00:08:25.403: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 8.857964189s May 1 00:08:27.407: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 10.861546065s May 1 00:08:29.410: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 12.864947344s May 1 00:08:31.413: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 14.867629338s May 1 00:08:33.415: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 16.869893781s May 1 00:08:35.419: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 18.873469575s May 1 00:08:37.422: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 20.876869815s May 1 00:08:39.425: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 22.879465748s May 1 00:08:41.429: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 24.883206249s May 1 00:08:43.432: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Running", Reason="", readiness=true. Elapsed: 26.886185524s May 1 00:08:45.435: INFO: Pod "pod-subpath-test-projected-kmbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.889965538s STEP: Saw pod success May 1 00:08:45.435: INFO: Pod "pod-subpath-test-projected-kmbd" satisfied condition "Succeeded or Failed" May 1 00:08:45.438: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-kmbd container test-container-subpath-projected-kmbd: STEP: delete the pod May 1 00:08:45.495: INFO: Waiting for pod pod-subpath-test-projected-kmbd to disappear May 1 00:08:45.502: INFO: Pod pod-subpath-test-projected-kmbd no longer exists STEP: Deleting pod pod-subpath-test-projected-kmbd May 1 00:08:45.502: INFO: Deleting pod "pod-subpath-test-projected-kmbd" in namespace "subpath-8642" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:08:45.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8642" for this suite. 
• [SLOW TEST:29.053 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":290,"completed":82,"skipped":1483,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:08:45.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1931.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1931.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1931.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1931.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1931.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1931.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:08:51.649: INFO: DNS probes using dns-1931/dns-test-b6193e45-1bdf-4148-a108-48fa58d0d151 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:08:51.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1931" for this suite. 
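A worked example of the podARec construction used in the wheezy/jessie probe commands above: a pod with IP 10.244.2.24 in namespace dns-1931 gets the A record 10-244-2-24.dns-1931.pod.cluster.local (dots in the IP become dashes). The doubled $$ in the logged commands is the pod-spec escape for a literal $, since Kubernetes performs its own $(VAR) expansion on command strings; run by hand inside such a pod, the single-$ form is:

hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1931.pod.cluster.local"}'
# -> 10-244-2-24.dns-1931.pod.cluster.local
dig +noall +answer 10-244-2-24.dns-1931.pod.cluster.local A    # should resolve back to 10.244.2.24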
• [SLOW TEST:6.225 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":290,"completed":83,"skipped":1493,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:08:51.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:08:52.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10" in namespace "downward-api-1503" to be "Succeeded or Failed" May 1 00:08:52.394: INFO: Pod "downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09045ms May 1 00:08:54.397: INFO: Pod "downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013271334s May 1 00:08:56.447: INFO: Pod "downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10": Phase="Running", Reason="", readiness=true. Elapsed: 4.063880662s May 1 00:08:58.451: INFO: Pod "downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067502327s STEP: Saw pod success May 1 00:08:58.451: INFO: Pod "downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10" satisfied condition "Succeeded or Failed" May 1 00:08:58.454: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10 container client-container: STEP: delete the pod May 1 00:08:58.491: INFO: Waiting for pod downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10 to disappear May 1 00:08:58.496: INFO: Pod downwardapi-volume-baa85b01-beab-49f7-9f6b-d64d03a0ed10 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:08:58.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1503" for this suite. 
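What "node allocatable (memory) as default memory limit" means: when a container declares no memory limit, a downwardAPI resourceFieldRef for limits.memory falls back to the node's allocatable memory. A sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    # no resources.limits.memory set here, so the projected value
    # is the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF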
• [SLOW TEST:6.792 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":84,"skipped":1501,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:08:58.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 1 00:08:58.587: INFO: Waiting up to 5m0s for pod "var-expansion-6ab17998-421f-4044-a25f-c682f58cf820" in namespace "var-expansion-5382" to be "Succeeded or Failed" May 1 00:08:58.593: INFO: Pod "var-expansion-6ab17998-421f-4044-a25f-c682f58cf820": Phase="Pending", Reason="", readiness=false. Elapsed: 5.208694ms May 1 00:09:00.595: INFO: Pod "var-expansion-6ab17998-421f-4044-a25f-c682f58cf820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008155084s May 1 00:09:02.599: INFO: Pod "var-expansion-6ab17998-421f-4044-a25f-c682f58cf820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011501193s STEP: Saw pod success May 1 00:09:02.599: INFO: Pod "var-expansion-6ab17998-421f-4044-a25f-c682f58cf820" satisfied condition "Succeeded or Failed" May 1 00:09:02.624: INFO: Trying to get logs from node latest-worker2 pod var-expansion-6ab17998-421f-4044-a25f-c682f58cf820 container dapi-container: STEP: delete the pod May 1 00:09:02.641: INFO: Waiting for pod var-expansion-6ab17998-421f-4044-a25f-c682f58cf820 to disappear May 1 00:09:02.662: INFO: Pod var-expansion-6ab17998-421f-4044-a25f-c682f58cf820 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:09:02.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5382" for this suite. 
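The substitution under test here is Kubernetes-side $(VAR) expansion in the container command, which happens when the command is built, before any shell runs. A sketch (names and message are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    # $(MESSAGE) is expanded by Kubernetes when building the command,
    # not by the shell:
    command: ["sh", "-c", "echo $(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # -> test message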
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":290,"completed":85,"skipped":1519,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:09:02.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-27a5a15f-ba30-43d2-a78e-d7443f0aee66 STEP: Creating a pod to test consume configMaps May 1 00:09:02.764: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff" in namespace "projected-8880" to be "Succeeded or Failed" May 1 00:09:02.795: INFO: Pod "pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff": Phase="Pending", Reason="", readiness=false. Elapsed: 30.403606ms May 1 00:09:04.798: INFO: Pod "pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033789128s May 1 00:09:06.802: INFO: Pod "pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037520975s STEP: Saw pod success May 1 00:09:06.802: INFO: Pod "pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff" satisfied condition "Succeeded or Failed" May 1 00:09:06.804: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff container projected-configmap-volume-test: STEP: delete the pod May 1 00:09:06.840: INFO: Waiting for pod pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff to disappear May 1 00:09:06.849: INFO: Pod pod-projected-configmaps-71e20185-4ddf-4c0c-b25c-3e8c44251bff no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:09:06.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8880" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":290,"completed":86,"skipped":1526,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:09:06.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:09:06.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb" in namespace "downward-api-8524" to be "Succeeded or Failed" May 1 00:09:07.017: INFO: Pod "downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 75.258494ms May 1 00:09:09.020: INFO: Pod "downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078956999s May 1 00:09:11.034: INFO: Pod "downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092956224s May 1 00:09:13.038: INFO: Pod "downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096590925s STEP: Saw pod success May 1 00:09:13.038: INFO: Pod "downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb" satisfied condition "Succeeded or Failed" May 1 00:09:13.040: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb container client-container: STEP: delete the pod May 1 00:09:13.217: INFO: Waiting for pod downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb to disappear May 1 00:09:13.364: INFO: Pod downwardapi-volume-bd781b89-41ef-4705-9ae4-c600de2fe4eb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:09:13.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8524" for this suite. 
• [SLOW TEST:6.493 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":87,"skipped":1532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:09:13.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:09:24.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3643" for this suite. • [SLOW TEST:11.202 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":290,"completed":88,"skipped":1566,"failed":0} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:09:24.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-b4c46ce7-0f4b-4c24-be53-599f10aaa885 STEP: Creating secret with name s-test-opt-upd-81db7c8c-74c0-4b40-8cc2-fbde4805fb3b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b4c46ce7-0f4b-4c24-be53-599f10aaa885 STEP: Updating secret s-test-opt-upd-81db7c8c-74c0-4b40-8cc2-fbde4805fb3b STEP: Creating secret with name s-test-opt-create-b93fb2d9-991c-4a9a-9452-344515a949d9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:10:43.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5783" for this suite. • [SLOW TEST:78.940 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":89,"skipped":1569,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:10:43.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 1 00:10:43.587: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:11:07.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2527" for this suite. 
• [SLOW TEST:23.959 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":290,"completed":90,"skipped":1584,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:11:07.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:11:07.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25" in namespace "projected-5943" to be "Succeeded or Failed" May 1 00:11:07.536: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463842ms May 1 00:11:09.938: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407035163s May 1 00:11:11.942: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410753625s May 1 00:11:14.263: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.731930823s May 1 00:11:16.464: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25": Phase="Running", Reason="", readiness=true. Elapsed: 8.932221355s May 1 00:11:18.533: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.001380709s STEP: Saw pod success May 1 00:11:18.533: INFO: Pod "downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25" satisfied condition "Succeeded or Failed" May 1 00:11:18.536: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25 container client-container: STEP: delete the pod May 1 00:11:19.090: INFO: Waiting for pod downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25 to disappear May 1 00:11:19.092: INFO: Pod downwardapi-volume-ecaa3e36-99bd-4e16-983b-165138d57c25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:11:19.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5943" for this suite. 
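The projected downwardAPI test above reads the container's memory limit back out of a projected volume file. A minimal sketch, with an assumed limit, divisor, image, and paths (the log shows none of these):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-projected-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					// A limit must be set for the file to report it (illustrative value).
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
										Divisor:       resource.MustParse("1"), // report the limit in bytes
									},
								}},
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
```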
• [SLOW TEST:11.620 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":290,"completed":91,"skipped":1609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:11:19.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 1 00:11:19.188: INFO: Waiting up to 5m0s for pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4" in namespace "downward-api-6340" to be "Succeeded or Failed" May 1 00:11:19.227: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.524893ms May 1 00:11:21.562: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373516005s May 1 00:11:23.565: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376063686s May 1 00:11:26.372: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.183781062s May 1 00:11:28.376: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4": Phase="Running", Reason="", readiness=true. Elapsed: 9.187662328s May 1 00:11:30.380: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.191251335s STEP: Saw pod success May 1 00:11:30.380: INFO: Pod "downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4" satisfied condition "Succeeded or Failed" May 1 00:11:30.382: INFO: Trying to get logs from node latest-worker2 pod downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4 container dapi-container: STEP: delete the pod May 1 00:11:30.444: INFO: Waiting for pod downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4 to disappear May 1 00:11:30.458: INFO: Pod downward-api-492e479e-f8eb-4cc7-a39b-eaf5601237b4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:11:30.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6340" for this suite. 
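The downward API env-var test above uses the same resourceFieldRef mechanism, but surfaced as environment variables instead of volume files. A sketch of the container shape, with assumed variable names, image, and resource values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox", // placeholder image
		Command: []string{"sh", "-c", "env"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("1250m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		// Each env var resolves one of the four resource fields the test checks.
		Env: []corev1.EnvVar{
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
			{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
			{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
			{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
		},
	}
	fmt.Println(container.Name)
}
```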
• [SLOW TEST:11.368 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":290,"completed":92,"skipped":1633,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:11:30.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 1 00:11:30.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450245 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:11:30.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450245 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 1 00:11:40.659: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450283 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:11:40.660: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450283 0 2020-05-01 00:11:30 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 1 00:11:50.666: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450313 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:11:50.666: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450313 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 1 00:12:00.674: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450343 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:12:00.674: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-a f0ce52d9-8ea9-413d-b408-e9d2fc562873 450343 0 2020-05-01 00:11:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 00:11:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 1 00:12:10.681: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-b 6ea94642-2d22-4f23-a39b-133e0f66482f 450373 0 2020-05-01 00:12:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 00:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:12:10.681: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-b 6ea94642-2d22-4f23-a39b-133e0f66482f 450373 0 2020-05-01 00:12:10 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 00:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 1 00:12:20.687: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-b 6ea94642-2d22-4f23-a39b-133e0f66482f 450403 0 2020-05-01 00:12:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 00:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:12:20.687: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6973 /api/v1/namespaces/watch-6973/configmaps/e2e-watch-test-configmap-b 6ea94642-2d22-4f23-a39b-133e0f66482f 450403 0 2020-05-01 00:12:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 00:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:12:30.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6973" for this suite. • [SLOW TEST:60.233 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":290,"completed":93,"skipped":1643,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:12:30.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:12:31.332: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:12:32.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2015" for this suite. 
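Stepping back to the [sig-api-machinery] Watchers run above: it opens three label-selector watches (label A, label B, and A-or-B) and asserts which ones see each ADDED/MODIFIED/DELETED event. A minimal client-go sketch of one such watcher, assuming the kubeconfig path the suite itself logs; the namespace is illustrative, and the A-or-B watcher would use a set-based selector such as "watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)":

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite logs (>>> kubeConfig: ...).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One watch per label selector; the test runs three of these concurrently.
	w, err := client.CoreV1().ConfigMaps("watch-example").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// The result channel delivers ADDED / MODIFIED / DELETED events, matching
	// the "Got : ADDED &ConfigMap{...}" lines in the log above.
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		fmt.Printf("Got : %s %s (resourceVersion %s)\n", ev.Type, cm.Name, cm.ResourceVersion)
	}
}
```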
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":290,"completed":94,"skipped":1645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:12:32.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-816b8742-5558-4041-b8f4-bb6de1569ad1 in namespace container-probe-8675 May 1 00:12:51.578: INFO: Started pod liveness-816b8742-5558-4041-b8f4-bb6de1569ad1 in namespace container-probe-8675 STEP: checking the pod's current state and verifying that restartCount is present May 1 00:12:51.580: INFO: Initial restart count of pod liveness-816b8742-5558-4041-b8f4-bb6de1569ad1 is 0 May 1 00:13:11.653: INFO: Restart count of pod container-probe-8675/liveness-816b8742-5558-4041-b8f4-bb6de1569ad1 is now 1 (20.073044954s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:13:11.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8675" for this suite. 
• [SLOW TEST:38.955 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":290,"completed":95,"skipped":1672,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:13:11.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 00:13:11.798: INFO: Waiting up to 5m0s for pod "pod-2ea94290-bd30-4aee-ae07-3738a4d645f3" in namespace "emptydir-1321" to be "Succeeded or Failed" May 1 00:13:11.855: INFO: Pod "pod-2ea94290-bd30-4aee-ae07-3738a4d645f3": Phase="Pending", Reason="", readiness=false. Elapsed: 56.367978ms May 1 00:13:13.959: INFO: Pod "pod-2ea94290-bd30-4aee-ae07-3738a4d645f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160664307s May 1 00:13:16.043: INFO: Pod "pod-2ea94290-bd30-4aee-ae07-3738a4d645f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.244437256s STEP: Saw pod success May 1 00:13:16.043: INFO: Pod "pod-2ea94290-bd30-4aee-ae07-3738a4d645f3" satisfied condition "Succeeded or Failed" May 1 00:13:16.046: INFO: Trying to get logs from node latest-worker pod pod-2ea94290-bd30-4aee-ae07-3738a4d645f3 container test-container: STEP: delete the pod May 1 00:13:16.127: INFO: Waiting for pod pod-2ea94290-bd30-4aee-ae07-3738a4d645f3 to disappear May 1 00:13:16.131: INFO: Pod pod-2ea94290-bd30-4aee-ae07-3738a4d645f3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:13:16.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1321" for this suite. 
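The emptyDir test above exercises a tmpfs-backed volume ("0644 on tmpfs") written by a non-root user. A minimal sketch of that pod shape, with an assumed UID, image, and mount path; the real test writes a file with mode 0644 and verifies its contents and permissions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
```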
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":96,"skipped":1678,"failed":0} SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:13:16.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-g5db8 in namespace proxy-2387 I0501 00:13:16.396918 7 runners.go:190] Created replication controller with name: proxy-service-g5db8, namespace: proxy-2387, replica count: 1 I0501 00:13:17.447361 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:13:18.447616 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:13:19.447848 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:13:20.448073 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:21.448288 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:22.448518 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:23.448738 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:24.448952 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:25.449328 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:26.449574 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 00:13:27.449799 7 runners.go:190] proxy-service-g5db8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:13:27.489: INFO: setup took 11.251561705s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 1 00:13:27.535: INFO: (0) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 45.939311ms) May 1 00:13:27.535: INFO: (0) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... 
(200; 46.025208ms) May 1 00:13:27.535: INFO: (0) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 45.903479ms) May 1 00:13:27.537: INFO: (0) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 47.964179ms) May 1 00:13:27.539: INFO: (0) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 49.806149ms) May 1 00:13:27.539: INFO: (0) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 49.889463ms) May 1 00:13:27.539: INFO: (0) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 50.070624ms) May 1 00:13:27.540: INFO: (0) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 50.327105ms) May 1 00:13:27.540: INFO: (0) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 50.457039ms) May 1 00:13:27.540: INFO: (0) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 50.527929ms) May 1 00:13:27.540: INFO: (0) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 51.077842ms) May 1 00:13:27.542: INFO: (0) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 52.718067ms) May 1 00:13:27.542: INFO: (0) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 52.865874ms) May 1 00:13:27.545: INFO: (0) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... (200; 3.314669ms) May 1 00:13:27.551: INFO: (1) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 3.518465ms) May 1 00:13:27.551: INFO: (1) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.731096ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 3.761485ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.923086ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.076784ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 4.091582ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.293408ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.378969ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 4.48149ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.387017ms) May 1 00:13:27.552: INFO: (1) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 4.442462ms) May 1 00:13:27.553: INFO: (1) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 5.132647ms) May 1 00:13:27.553: INFO: (1) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 5.211143ms) May 1 00:13:27.553: INFO: (1) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 5.323869ms) May 1 00:13:27.553: INFO: (1) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... 
(200; 2.956965ms) May 1 00:13:27.556: INFO: (2) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 2.877477ms) May 1 00:13:27.556: INFO: (2) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 2.984655ms) May 1 00:13:27.557: INFO: (2) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 3.415899ms) May 1 00:13:27.557: INFO: (2) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 3.426683ms) May 1 00:13:27.557: INFO: (2) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 3.564999ms) May 1 00:13:27.557: INFO: (2) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.450999ms) May 1 00:13:27.557: INFO: (2) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.539504ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 5.138985ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 5.29752ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 5.277656ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 5.453296ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 5.261865ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 5.327254ms) May 1 00:13:27.559: INFO: (2) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 5.343515ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.133643ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 3.550308ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... (200; 3.699629ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 3.884548ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 4.260542ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 3.699578ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.895075ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.183483ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 4.389679ms) May 1 00:13:27.563: INFO: (3) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.459035ms) May 1 00:13:27.566: INFO: (4) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 5.939159ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... 
(200; 5.91849ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 6.43178ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 6.468287ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 6.472867ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 6.520502ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 6.533571ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 6.8306ms) May 1 00:13:27.570: INFO: (4) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 6.79434ms) May 1 00:13:27.571: INFO: (4) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 6.945967ms) May 1 00:13:27.571: INFO: (4) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 6.944648ms) May 1 00:13:27.571: INFO: (4) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 7.014221ms) May 1 00:13:27.571: INFO: (4) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 7.039941ms) May 1 00:13:27.571: INFO: (4) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 6.978604ms) May 1 00:13:27.573: INFO: (5) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... (200; 4.662226ms) May 1 00:13:27.575: INFO: (5) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 4.740607ms) May 1 00:13:27.576: INFO: (5) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.839059ms) May 1 00:13:27.576: INFO: (5) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 4.836981ms) May 1 00:13:27.576: INFO: (5) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 5.038622ms) May 1 00:13:27.576: INFO: (5) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 5.156948ms) May 1 00:13:27.576: INFO: (5) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 5.239518ms) May 1 00:13:27.578: INFO: (6) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 1.691398ms) May 1 00:13:27.579: INFO: (6) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.231404ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 3.817373ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.760123ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... 
(200; 3.861407ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 3.820202ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.857467ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.229195ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.207088ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 4.208928ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 4.149057ms) May 1 00:13:27.580: INFO: (6) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.215349ms) May 1 00:13:27.581: INFO: (6) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 3.598762ms) May 1 00:13:27.585: INFO: (7) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 3.867783ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.057799ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 4.065732ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 4.262183ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 4.250257ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.258347ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 4.354358ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 4.413717ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... 
(200; 4.638254ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 4.691964ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.795003ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.723312ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.753739ms) May 1 00:13:27.586: INFO: (7) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 4.872357ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 7.301185ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 7.382901ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 7.356708ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 7.549069ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 7.576179ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 7.552954ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 7.529457ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 7.52641ms) May 1 00:13:27.594: INFO: (8) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 7.541809ms) May 1 00:13:27.595: INFO: (8) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 8.303579ms) May 1 00:13:27.595: INFO: (8) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 8.314294ms) May 1 00:13:27.595: INFO: (8) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 8.333977ms) May 1 00:13:27.595: INFO: (8) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 8.425744ms) May 1 00:13:27.595: INFO: (8) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 8.447787ms) May 1 00:13:27.599: INFO: (9) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 4.342303ms) May 1 00:13:27.599: INFO: (9) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 4.458382ms) May 1 00:13:27.600: INFO: (9) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 4.573393ms) May 1 00:13:27.600: INFO: (9) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.585818ms) May 1 00:13:27.600: INFO: (9) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.598195ms) May 1 00:13:27.600: INFO: (9) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... 
(200; 3.464319ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 3.499836ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.524823ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.500539ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.613135ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 3.608273ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 3.551338ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 3.661688ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 3.876954ms) May 1 00:13:27.604: INFO: (10) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.950846ms) May 1 00:13:27.605: INFO: (10) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.14326ms) May 1 00:13:27.605: INFO: (10) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 4.305954ms) May 1 00:13:27.605: INFO: (10) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... (200; 4.558313ms) May 1 00:13:27.610: INFO: (11) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 4.566311ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 5.326819ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 5.250386ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 5.514506ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 5.464201ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 5.575589ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 5.560944ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... 
(200; 5.582041ms) May 1 00:13:27.611: INFO: (11) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 5.545676ms) May 1 00:13:27.614: INFO: (12) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.404751ms) May 1 00:13:27.614: INFO: (12) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.392974ms) May 1 00:13:27.614: INFO: (12) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.426275ms) May 1 00:13:27.614: INFO: (12) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.56421ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 3.752ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 4.195087ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 4.192437ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 4.227078ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test<... (200; 4.238879ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.294562ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 4.235723ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 4.305574ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.268174ms) May 1 00:13:27.615: INFO: (12) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 4.241629ms) May 1 00:13:27.618: INFO: (13) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 3.075854ms) May 1 00:13:27.618: INFO: (13) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test<... 
(200; 3.144793ms) May 1 00:13:27.618: INFO: (13) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.265894ms) May 1 00:13:27.619: INFO: (13) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 3.243653ms) May 1 00:13:27.619: INFO: (13) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.785885ms) May 1 00:13:27.619: INFO: (13) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 3.85797ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.392307ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 4.592015ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 4.382413ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.660276ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 4.714885ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.689587ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 4.958904ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 5.013821ms) May 1 00:13:27.620: INFO: (13) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 5.062031ms) May 1 00:13:27.624: INFO: (14) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.002723ms) May 1 00:13:27.624: INFO: (14) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.177401ms) May 1 00:13:27.624: INFO: (14) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 3.177114ms) May 1 00:13:27.624: INFO: (14) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 2.964834ms) May 1 00:13:27.624: INFO: (14) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.397515ms) May 1 00:13:27.624: INFO: (14) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... 
(200; 3.86065ms) May 1 00:13:27.625: INFO: (14) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 4.936397ms) May 1 00:13:27.625: INFO: (14) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 5.020493ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 5.11501ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 5.129029ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 5.224989ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 5.242331ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 5.292927ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 5.341475ms) May 1 00:13:27.626: INFO: (14) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 5.396552ms) May 1 00:13:27.628: INFO: (15) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 1.93897ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 4.182534ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 3.344236ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 3.950214ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 4.051922ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 3.221779ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 3.335686ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 3.5204ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.196101ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 4.329022ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... 
(200; 3.220493ms) May 1 00:13:27.630: INFO: (15) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 3.695366ms) May 1 00:13:27.631: INFO: (15) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.136195ms) May 1 00:13:27.631: INFO: (15) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 4.496547ms) May 1 00:13:27.631: INFO: (15) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 1.953222ms) May 1 00:13:27.634: INFO: (16) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 3.348564ms) May 1 00:13:27.634: INFO: (16) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 3.42401ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 4.136206ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.23419ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 4.163072ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 4.194819ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 4.264879ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: ... (200; 4.1946ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 4.256002ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 4.283649ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.2651ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.272086ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.334746ms) May 1 00:13:27.635: INFO: (16) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 4.305598ms) May 1 00:13:27.639: INFO: (17) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz/proxy/: test (200; 4.102299ms) May 1 00:13:27.639: INFO: (17) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.316527ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 4.501659ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.529213ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 4.869043ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 4.766477ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... 
(200; 4.753884ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 4.783359ms) May 1 00:13:27.640: INFO: (17) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test<... (200; 2.67164ms) May 1 00:13:27.644: INFO: (18) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... (200; 2.655437ms) May 1 00:13:27.644: INFO: (18) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 2.90436ms) May 1 00:13:27.644: INFO: (18) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 2.914644ms) May 1 00:13:27.644: INFO: (18) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 3.933125ms) May 1 00:13:27.645: INFO: (18) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 4.223339ms) May 1 00:13:27.645: INFO: (18) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 4.12697ms) May 1 00:13:27.645: INFO: (18) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 4.192439ms) May 1 00:13:27.648: INFO: (19) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:460/proxy/: tls baz (200; 2.850735ms) May 1 00:13:27.648: INFO: (19) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:462/proxy/: tls qux (200; 2.875494ms) May 1 00:13:27.649: INFO: (19) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:162/proxy/: bar (200; 3.017127ms) May 1 00:13:27.649: INFO: (19) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:160/proxy/: foo (200; 3.058176ms) May 1 00:13:27.649: INFO: (19) /api/v1/namespaces/proxy-2387/pods/https:proxy-service-g5db8-npbcz:443/proxy/: test (200; 4.085149ms) May 1 00:13:27.650: INFO: (19) /api/v1/namespaces/proxy-2387/pods/proxy-service-g5db8-npbcz:1080/proxy/: test<... (200; 4.149481ms) May 1 00:13:27.650: INFO: (19) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:160/proxy/: foo (200; 4.200158ms) May 1 00:13:27.653: INFO: (19) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:162/proxy/: bar (200; 7.55483ms) May 1 00:13:27.654: INFO: (19) /api/v1/namespaces/proxy-2387/pods/http:proxy-service-g5db8-npbcz:1080/proxy/: ... 
(200; 8.63672ms) May 1 00:13:27.655: INFO: (19) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname2/proxy/: bar (200; 9.689932ms) May 1 00:13:27.655: INFO: (19) /api/v1/namespaces/proxy-2387/services/http:proxy-service-g5db8:portname1/proxy/: foo (200; 9.753874ms) May 1 00:13:27.655: INFO: (19) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname2/proxy/: bar (200; 9.816145ms) May 1 00:13:27.655: INFO: (19) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname1/proxy/: tls baz (200; 9.730157ms) May 1 00:13:27.655: INFO: (19) /api/v1/namespaces/proxy-2387/services/https:proxy-service-g5db8:tlsportname2/proxy/: tls qux (200; 9.764979ms) May 1 00:13:27.657: INFO: (19) /api/v1/namespaces/proxy-2387/services/proxy-service-g5db8:portname1/proxy/: foo (200; 11.661797ms) STEP: deleting ReplicationController proxy-service-g5db8 in namespace proxy-2387, will wait for the garbage collector to delete the pods May 1 00:13:27.715: INFO: Deleting ReplicationController proxy-service-g5db8 took: 5.99191ms May 1 00:13:28.015: INFO: Terminating ReplicationController proxy-service-g5db8 pods took: 300.165143ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:13:44.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2387" for this suite. • [SLOW TEST:28.806 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":290,"completed":97,"skipped":1682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:13:44.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-18 May 1 00:14:03.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 1 00:14:06.170: INFO: stderr: "I0501 00:14:06.078244 926 log.go:172] (0xc00003a0b0) (0xc00062cd20) Create stream\nI0501 00:14:06.078302 926 log.go:172] (0xc00003a0b0) (0xc00062cd20) Stream added, broadcasting: 1\nI0501 00:14:06.081701 926 log.go:172] (0xc00003a0b0) Reply frame 
received for 1\nI0501 00:14:06.081740 926 log.go:172] (0xc00003a0b0) (0xc00062dcc0) Create stream\nI0501 00:14:06.081754 926 log.go:172] (0xc00003a0b0) (0xc00062dcc0) Stream added, broadcasting: 3\nI0501 00:14:06.082489 926 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0501 00:14:06.082523 926 log.go:172] (0xc00003a0b0) (0xc0005ce5a0) Create stream\nI0501 00:14:06.082534 926 log.go:172] (0xc00003a0b0) (0xc0005ce5a0) Stream added, broadcasting: 5\nI0501 00:14:06.083307 926 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0501 00:14:06.158773 926 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0501 00:14:06.158800 926 log.go:172] (0xc0005ce5a0) (5) Data frame handling\nI0501 00:14:06.158817 926 log.go:172] (0xc0005ce5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0501 00:14:06.164826 926 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0501 00:14:06.164857 926 log.go:172] (0xc00062dcc0) (3) Data frame handling\nI0501 00:14:06.164876 926 log.go:172] (0xc00062dcc0) (3) Data frame sent\nI0501 00:14:06.165068 926 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0501 00:14:06.165080 926 log.go:172] (0xc0005ce5a0) (5) Data frame handling\nI0501 00:14:06.165421 926 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0501 00:14:06.165431 926 log.go:172] (0xc00062dcc0) (3) Data frame handling\nI0501 00:14:06.166724 926 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0501 00:14:06.166733 926 log.go:172] (0xc00062cd20) (1) Data frame handling\nI0501 00:14:06.166739 926 log.go:172] (0xc00062cd20) (1) Data frame sent\nI0501 00:14:06.166746 926 log.go:172] (0xc00003a0b0) (0xc00062cd20) Stream removed, broadcasting: 1\nI0501 00:14:06.166931 926 log.go:172] (0xc00003a0b0) (0xc00062cd20) Stream removed, broadcasting: 1\nI0501 00:14:06.166950 926 log.go:172] (0xc00003a0b0) Go away received\nI0501 00:14:06.166975 926 log.go:172] (0xc00003a0b0) (0xc00062dcc0) Stream removed, broadcasting: 3\nI0501 00:14:06.166983 926 log.go:172] (0xc00003a0b0) (0xc0005ce5a0) Stream removed, broadcasting: 5\n" May 1 00:14:06.170: INFO: stdout: "iptables" May 1 00:14:06.170: INFO: proxyMode: iptables May 1 00:14:06.174: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:14:06.195: INFO: Pod kube-proxy-mode-detector still exists May 1 00:14:08.196: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:14:08.224: INFO: Pod kube-proxy-mode-detector still exists May 1 00:14:10.196: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:14:10.221: INFO: Pod kube-proxy-mode-detector still exists May 1 00:14:12.196: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:14:12.200: INFO: Pod kube-proxy-mode-detector still exists May 1 00:14:14.196: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:14:14.199: INFO: Pod kube-proxy-mode-detector still exists May 1 00:14:16.196: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 1 00:14:16.198: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-18 STEP: creating replication controller affinity-nodeport-timeout in namespace services-18 I0501 00:14:16.331032 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-18, replica count: 3 I0501 00:14:19.381496 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0501 00:14:22.381761 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:14:22.392: INFO: Creating new exec pod May 1 00:14:27.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 1 00:14:27.658: INFO: stderr: "I0501 00:14:27.575941 960 log.go:172] (0xc00003ad10) (0xc0005f8140) Create stream\nI0501 00:14:27.575995 960 log.go:172] (0xc00003ad10) (0xc0005f8140) Stream added, broadcasting: 1\nI0501 00:14:27.578130 960 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0501 00:14:27.578175 960 log.go:172] (0xc00003ad10) (0xc000542140) Create stream\nI0501 00:14:27.578189 960 log.go:172] (0xc00003ad10) (0xc000542140) Stream added, broadcasting: 3\nI0501 00:14:27.579115 960 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0501 00:14:27.579137 960 log.go:172] (0xc00003ad10) (0xc0005f86e0) Create stream\nI0501 00:14:27.579144 960 log.go:172] (0xc00003ad10) (0xc0005f86e0) Stream added, broadcasting: 5\nI0501 00:14:27.580203 960 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0501 00:14:27.651567 960 log.go:172] (0xc00003ad10) Data frame received for 5\nI0501 00:14:27.651589 960 log.go:172] (0xc0005f86e0) (5) Data frame handling\nI0501 00:14:27.651599 960 log.go:172] (0xc0005f86e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0501 00:14:27.652144 960 log.go:172] (0xc00003ad10) Data frame received for 5\nI0501 00:14:27.652183 960 log.go:172] (0xc0005f86e0) (5) Data frame handling\nI0501 00:14:27.652219 960 log.go:172] (0xc0005f86e0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0501 00:14:27.652269 960 log.go:172] (0xc00003ad10) Data frame received for 3\nI0501 00:14:27.652296 960 log.go:172] (0xc00003ad10) Data frame received for 5\nI0501 00:14:27.652331 960 log.go:172] (0xc0005f86e0) (5) Data frame handling\nI0501 00:14:27.652352 960 log.go:172] (0xc000542140) (3) Data frame handling\nI0501 00:14:27.653764 960 log.go:172] (0xc00003ad10) Data frame received for 1\nI0501 00:14:27.653779 960 log.go:172] (0xc0005f8140) (1) Data frame handling\nI0501 00:14:27.653790 960 log.go:172] (0xc0005f8140) (1) Data frame sent\nI0501 00:14:27.653804 960 log.go:172] (0xc00003ad10) (0xc0005f8140) Stream removed, broadcasting: 1\nI0501 00:14:27.653835 960 log.go:172] (0xc00003ad10) Go away received\nI0501 00:14:27.654077 960 log.go:172] (0xc00003ad10) (0xc0005f8140) Stream removed, broadcasting: 1\nI0501 00:14:27.654097 960 log.go:172] (0xc00003ad10) (0xc000542140) Stream removed, broadcasting: 3\nI0501 00:14:27.654110 960 log.go:172] (0xc00003ad10) (0xc0005f86e0) Stream removed, broadcasting: 5\n" May 1 00:14:27.658: INFO: stdout: "" May 1 00:14:27.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c nc -zv -t -w 2 10.107.70.194 80' May 1 00:14:27.878: INFO: stderr: "I0501 00:14:27.787393 981 log.go:172] (0xc00051afd0) (0xc0009e8140) Create stream\nI0501 00:14:27.787440 981 log.go:172] (0xc00051afd0) (0xc0009e8140) Stream added, broadcasting: 1\nI0501 00:14:27.793331 981 log.go:172] (0xc00051afd0) Reply frame received for 1\nI0501 00:14:27.793384 981 log.go:172] (0xc00051afd0) 
(0xc000654e60) Create stream\nI0501 00:14:27.793400 981 log.go:172] (0xc00051afd0) (0xc000654e60) Stream added, broadcasting: 3\nI0501 00:14:27.794830 981 log.go:172] (0xc00051afd0) Reply frame received for 3\nI0501 00:14:27.794871 981 log.go:172] (0xc00051afd0) (0xc0005eea00) Create stream\nI0501 00:14:27.794886 981 log.go:172] (0xc00051afd0) (0xc0005eea00) Stream added, broadcasting: 5\nI0501 00:14:27.796221 981 log.go:172] (0xc00051afd0) Reply frame received for 5\nI0501 00:14:27.871597 981 log.go:172] (0xc00051afd0) Data frame received for 5\nI0501 00:14:27.871657 981 log.go:172] (0xc0005eea00) (5) Data frame handling\nI0501 00:14:27.871677 981 log.go:172] (0xc0005eea00) (5) Data frame sent\nI0501 00:14:27.871691 981 log.go:172] (0xc00051afd0) Data frame received for 5\nI0501 00:14:27.871702 981 log.go:172] (0xc0005eea00) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.70.194 80\nConnection to 10.107.70.194 80 port [tcp/http] succeeded!\nI0501 00:14:27.871731 981 log.go:172] (0xc00051afd0) Data frame received for 3\nI0501 00:14:27.871745 981 log.go:172] (0xc000654e60) (3) Data frame handling\nI0501 00:14:27.873226 981 log.go:172] (0xc00051afd0) Data frame received for 1\nI0501 00:14:27.873277 981 log.go:172] (0xc0009e8140) (1) Data frame handling\nI0501 00:14:27.873283 981 log.go:172] (0xc0009e8140) (1) Data frame sent\nI0501 00:14:27.873292 981 log.go:172] (0xc00051afd0) (0xc0009e8140) Stream removed, broadcasting: 1\nI0501 00:14:27.873537 981 log.go:172] (0xc00051afd0) Go away received\nI0501 00:14:27.873592 981 log.go:172] (0xc00051afd0) (0xc0009e8140) Stream removed, broadcasting: 1\nI0501 00:14:27.873625 981 log.go:172] (0xc00051afd0) (0xc000654e60) Stream removed, broadcasting: 3\nI0501 00:14:27.873655 981 log.go:172] (0xc00051afd0) (0xc0005eea00) Stream removed, broadcasting: 5\n" May 1 00:14:27.878: INFO: stdout: "" May 1 00:14:27.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31110' May 1 00:14:28.094: INFO: stderr: "I0501 00:14:28.014485 1000 log.go:172] (0xc0000e9d90) (0xc000b605a0) Create stream\nI0501 00:14:28.014541 1000 log.go:172] (0xc0000e9d90) (0xc000b605a0) Stream added, broadcasting: 1\nI0501 00:14:28.018591 1000 log.go:172] (0xc0000e9d90) Reply frame received for 1\nI0501 00:14:28.018641 1000 log.go:172] (0xc0000e9d90) (0xc0006e4dc0) Create stream\nI0501 00:14:28.018673 1000 log.go:172] (0xc0000e9d90) (0xc0006e4dc0) Stream added, broadcasting: 3\nI0501 00:14:28.019796 1000 log.go:172] (0xc0000e9d90) Reply frame received for 3\nI0501 00:14:28.019878 1000 log.go:172] (0xc0000e9d90) (0xc000660640) Create stream\nI0501 00:14:28.019910 1000 log.go:172] (0xc0000e9d90) (0xc000660640) Stream added, broadcasting: 5\nI0501 00:14:28.020754 1000 log.go:172] (0xc0000e9d90) Reply frame received for 5\nI0501 00:14:28.084269 1000 log.go:172] (0xc0000e9d90) Data frame received for 5\nI0501 00:14:28.084292 1000 log.go:172] (0xc000660640) (5) Data frame handling\nI0501 00:14:28.084310 1000 log.go:172] (0xc000660640) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31110\nI0501 00:14:28.084953 1000 log.go:172] (0xc0000e9d90) Data frame received for 5\nI0501 00:14:28.084998 1000 log.go:172] (0xc000660640) (5) Data frame handling\nI0501 00:14:28.085020 1000 log.go:172] (0xc000660640) (5) Data frame sent\nConnection to 172.17.0.13 31110 port [tcp/31110] succeeded!\nI0501 00:14:28.085988 1000 log.go:172] (0xc0000e9d90) Data frame 
received for 5\nI0501 00:14:28.086007 1000 log.go:172] (0xc000660640) (5) Data frame handling\nI0501 00:14:28.086781 1000 log.go:172] (0xc0000e9d90) Data frame received for 3\nI0501 00:14:28.086802 1000 log.go:172] (0xc0006e4dc0) (3) Data frame handling\nI0501 00:14:28.090107 1000 log.go:172] (0xc0000e9d90) Data frame received for 1\nI0501 00:14:28.090122 1000 log.go:172] (0xc000b605a0) (1) Data frame handling\nI0501 00:14:28.090128 1000 log.go:172] (0xc000b605a0) (1) Data frame sent\nI0501 00:14:28.090141 1000 log.go:172] (0xc0000e9d90) (0xc000b605a0) Stream removed, broadcasting: 1\nI0501 00:14:28.090307 1000 log.go:172] (0xc0000e9d90) Go away received\nI0501 00:14:28.090399 1000 log.go:172] (0xc0000e9d90) (0xc000b605a0) Stream removed, broadcasting: 1\nI0501 00:14:28.090413 1000 log.go:172] (0xc0000e9d90) (0xc0006e4dc0) Stream removed, broadcasting: 3\nI0501 00:14:28.090419 1000 log.go:172] (0xc0000e9d90) (0xc000660640) Stream removed, broadcasting: 5\n" May 1 00:14:28.094: INFO: stdout: "" May 1 00:14:28.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31110' May 1 00:14:28.281: INFO: stderr: "I0501 00:14:28.212824 1022 log.go:172] (0xc000a9d3f0) (0xc000a685a0) Create stream\nI0501 00:14:28.212888 1022 log.go:172] (0xc000a9d3f0) (0xc000a685a0) Stream added, broadcasting: 1\nI0501 00:14:28.215797 1022 log.go:172] (0xc000a9d3f0) Reply frame received for 1\nI0501 00:14:28.215837 1022 log.go:172] (0xc000a9d3f0) (0xc000624f00) Create stream\nI0501 00:14:28.215866 1022 log.go:172] (0xc000a9d3f0) (0xc000624f00) Stream added, broadcasting: 3\nI0501 00:14:28.216695 1022 log.go:172] (0xc000a9d3f0) Reply frame received for 3\nI0501 00:14:28.216720 1022 log.go:172] (0xc000a9d3f0) (0xc000625220) Create stream\nI0501 00:14:28.216727 1022 log.go:172] (0xc000a9d3f0) (0xc000625220) Stream added, broadcasting: 5\nI0501 00:14:28.217549 1022 log.go:172] (0xc000a9d3f0) Reply frame received for 5\nI0501 00:14:28.274895 1022 log.go:172] (0xc000a9d3f0) Data frame received for 3\nI0501 00:14:28.274925 1022 log.go:172] (0xc000624f00) (3) Data frame handling\nI0501 00:14:28.274953 1022 log.go:172] (0xc000a9d3f0) Data frame received for 5\nI0501 00:14:28.274977 1022 log.go:172] (0xc000625220) (5) Data frame handling\nI0501 00:14:28.274994 1022 log.go:172] (0xc000625220) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31110\nConnection to 172.17.0.12 31110 port [tcp/31110] succeeded!\nI0501 00:14:28.275011 1022 log.go:172] (0xc000a9d3f0) Data frame received for 5\nI0501 00:14:28.275037 1022 log.go:172] (0xc000625220) (5) Data frame handling\nI0501 00:14:28.276507 1022 log.go:172] (0xc000a9d3f0) Data frame received for 1\nI0501 00:14:28.276528 1022 log.go:172] (0xc000a685a0) (1) Data frame handling\nI0501 00:14:28.276537 1022 log.go:172] (0xc000a685a0) (1) Data frame sent\nI0501 00:14:28.276548 1022 log.go:172] (0xc000a9d3f0) (0xc000a685a0) Stream removed, broadcasting: 1\nI0501 00:14:28.276559 1022 log.go:172] (0xc000a9d3f0) Go away received\nI0501 00:14:28.277007 1022 log.go:172] (0xc000a9d3f0) (0xc000a685a0) Stream removed, broadcasting: 1\nI0501 00:14:28.277027 1022 log.go:172] (0xc000a9d3f0) (0xc000624f00) Stream removed, broadcasting: 3\nI0501 00:14:28.277037 1022 log.go:172] (0xc000a9d3f0) (0xc000625220) Stream removed, broadcasting: 5\n" May 1 00:14:28.281: INFO: stdout: "" May 1 00:14:28.282: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31110/ ; done' May 1 00:14:28.566: INFO: stderr: "I0501 00:14:28.418481 1041 log.go:172] (0xc000b9e2c0) (0xc0006134a0) Create stream\nI0501 00:14:28.418551 1041 log.go:172] (0xc000b9e2c0) (0xc0006134a0) Stream added, broadcasting: 1\nI0501 00:14:28.421429 1041 log.go:172] (0xc000b9e2c0) Reply frame received for 1\nI0501 00:14:28.421476 1041 log.go:172] (0xc000b9e2c0) (0xc00068c5a0) Create stream\nI0501 00:14:28.421492 1041 log.go:172] (0xc000b9e2c0) (0xc00068c5a0) Stream added, broadcasting: 3\nI0501 00:14:28.422482 1041 log.go:172] (0xc000b9e2c0) Reply frame received for 3\nI0501 00:14:28.422521 1041 log.go:172] (0xc000b9e2c0) (0xc000613b80) Create stream\nI0501 00:14:28.422531 1041 log.go:172] (0xc000b9e2c0) (0xc000613b80) Stream added, broadcasting: 5\nI0501 00:14:28.423368 1041 log.go:172] (0xc000b9e2c0) Reply frame received for 5\nI0501 00:14:28.472099 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.472142 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.472162 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.472192 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.472211 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.472233 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.476203 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.476226 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.476244 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.476842 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.476864 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.476879 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.476923 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.476944 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.476960 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.483200 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.483219 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.483240 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.483976 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.483990 1041 log.go:172] (0xc000613b80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.484003 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.484024 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.484034 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.484062 1041 log.go:172] (0xc000613b80) (5) Data frame sent\nI0501 00:14:28.489257 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.489276 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.489285 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.489808 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.489822 1041 log.go:172] (0xc000613b80) (5) Data frame 
handling\nI0501 00:14:28.489834 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.489945 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.489969 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.489998 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.493794 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.493810 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.493822 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.494213 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.494238 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.494264 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.494290 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.494302 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.494316 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.499144 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.499169 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.499188 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.499680 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.499708 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.499722 1041 log.go:172] (0xc000613b80) (5) Data frame sent\nI0501 00:14:28.499746 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.499757 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.499768 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.504592 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.504653 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.504678 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.505058 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.505074 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.505105 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.505377 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.505392 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.505401 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.511669 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.511763 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.511820 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.513796 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.513815 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.513838 1041 log.go:172] (0xc000613b80) (5) Data frame sent\nI0501 00:14:28.513851 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.513856 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.513862 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.517205 1041 log.go:172] (0xc000b9e2c0) Data frame received 
for 3\nI0501 00:14:28.517225 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.517245 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.517683 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.517693 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.517702 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.517798 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.517810 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.517821 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.521751 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.521764 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.521777 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.522062 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.522080 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.522092 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.522134 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.522146 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.522155 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.526151 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.526163 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.526174 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.526723 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.526740 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.526750 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.526765 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.526785 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.526802 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.532515 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.532529 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.532541 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.533323 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.533341 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.533358 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.533530 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.533544 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.533570 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.537673 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.537693 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.537712 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.538241 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.538268 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.538294 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.538306 1041 log.go:172] (0xc000613b80) (5) 
Data frame sent\nI0501 00:14:28.538313 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.538321 1041 log.go:172] (0xc000613b80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.538343 1041 log.go:172] (0xc000613b80) (5) Data frame sent\nI0501 00:14:28.538354 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.538376 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.542308 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.542330 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.542355 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.542676 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.542689 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.542698 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.542713 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.542733 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.542756 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.547495 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.547523 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.547551 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.548103 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.548117 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.548126 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.548144 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.548166 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.548188 1041 log.go:172] (0xc000613b80) (5) Data frame sent\nI0501 00:14:28.548200 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.548220 1041 log.go:172] (0xc000613b80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.548250 1041 log.go:172] (0xc000613b80) (5) Data frame sent\nI0501 00:14:28.554452 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.554464 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.554472 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.554837 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.554856 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.554877 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.554886 1041 log.go:172] (0xc000613b80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.554900 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.554910 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.559433 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.559463 1041 log.go:172] (0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.559494 1041 log.go:172] (0xc00068c5a0) (3) Data frame sent\nI0501 00:14:28.559953 1041 log.go:172] (0xc000b9e2c0) Data frame received for 5\nI0501 00:14:28.559967 1041 log.go:172] (0xc000613b80) (5) Data frame handling\nI0501 00:14:28.559985 1041 log.go:172] (0xc000b9e2c0) Data frame received for 3\nI0501 00:14:28.560000 1041 log.go:172] 
(0xc00068c5a0) (3) Data frame handling\nI0501 00:14:28.561562 1041 log.go:172] (0xc000b9e2c0) Data frame received for 1\nI0501 00:14:28.561595 1041 log.go:172] (0xc0006134a0) (1) Data frame handling\nI0501 00:14:28.561620 1041 log.go:172] (0xc0006134a0) (1) Data frame sent\nI0501 00:14:28.561649 1041 log.go:172] (0xc000b9e2c0) (0xc0006134a0) Stream removed, broadcasting: 1\nI0501 00:14:28.561888 1041 log.go:172] (0xc000b9e2c0) Go away received\nI0501 00:14:28.562012 1041 log.go:172] (0xc000b9e2c0) (0xc0006134a0) Stream removed, broadcasting: 1\nI0501 00:14:28.562035 1041 log.go:172] (0xc000b9e2c0) (0xc00068c5a0) Stream removed, broadcasting: 3\nI0501 00:14:28.562054 1041 log.go:172] (0xc000b9e2c0) (0xc000613b80) Stream removed, broadcasting: 5\n" May 1 00:14:28.566: INFO: stdout: "\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl\naffinity-nodeport-timeout-b8zdl" May 1 00:14:28.566: INFO: Received response from host: May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Received response from host: affinity-nodeport-timeout-b8zdl May 1 00:14:28.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31110/' May 1 00:14:28.754: INFO: stderr: "I0501 00:14:28.677687 1063 log.go:172] (0xc000630160) (0xc0004adae0) Create stream\nI0501 00:14:28.677729 1063 log.go:172] (0xc000630160) (0xc0004adae0) Stream added, broadcasting: 1\nI0501 00:14:28.680154 1063 log.go:172] (0xc000630160) Reply frame received for 1\nI0501 00:14:28.680179 1063 log.go:172] (0xc000630160) (0xc000456f00) Create stream\nI0501 00:14:28.680189 1063 log.go:172] (0xc000630160) (0xc000456f00) Stream added, broadcasting: 3\nI0501 00:14:28.681322 1063 log.go:172] 
(0xc000630160) Reply frame received for 3\nI0501 00:14:28.681352 1063 log.go:172] (0xc000630160) (0xc0003ae000) Create stream\nI0501 00:14:28.681361 1063 log.go:172] (0xc000630160) (0xc0003ae000) Stream added, broadcasting: 5\nI0501 00:14:28.682181 1063 log.go:172] (0xc000630160) Reply frame received for 5\nI0501 00:14:28.741415 1063 log.go:172] (0xc000630160) Data frame received for 5\nI0501 00:14:28.741440 1063 log.go:172] (0xc0003ae000) (5) Data frame handling\nI0501 00:14:28.741457 1063 log.go:172] (0xc0003ae000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:28.746253 1063 log.go:172] (0xc000630160) Data frame received for 3\nI0501 00:14:28.746274 1063 log.go:172] (0xc000456f00) (3) Data frame handling\nI0501 00:14:28.746292 1063 log.go:172] (0xc000456f00) (3) Data frame sent\nI0501 00:14:28.746968 1063 log.go:172] (0xc000630160) Data frame received for 5\nI0501 00:14:28.746980 1063 log.go:172] (0xc0003ae000) (5) Data frame handling\nI0501 00:14:28.746996 1063 log.go:172] (0xc000630160) Data frame received for 3\nI0501 00:14:28.747002 1063 log.go:172] (0xc000456f00) (3) Data frame handling\nI0501 00:14:28.750133 1063 log.go:172] (0xc000630160) Data frame received for 1\nI0501 00:14:28.750149 1063 log.go:172] (0xc0004adae0) (1) Data frame handling\nI0501 00:14:28.750159 1063 log.go:172] (0xc0004adae0) (1) Data frame sent\nI0501 00:14:28.750172 1063 log.go:172] (0xc000630160) (0xc0004adae0) Stream removed, broadcasting: 1\nI0501 00:14:28.750255 1063 log.go:172] (0xc000630160) Go away received\nI0501 00:14:28.750461 1063 log.go:172] (0xc000630160) (0xc0004adae0) Stream removed, broadcasting: 1\nI0501 00:14:28.750479 1063 log.go:172] (0xc000630160) (0xc000456f00) Stream removed, broadcasting: 3\nI0501 00:14:28.750488 1063 log.go:172] (0xc000630160) (0xc0003ae000) Stream removed, broadcasting: 5\n" May 1 00:14:28.754: INFO: stdout: "affinity-nodeport-timeout-b8zdl" May 1 00:14:43.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-18 execpod-affinity6x9w9 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31110/' May 1 00:14:43.968: INFO: stderr: "I0501 00:14:43.877504 1083 log.go:172] (0xc00044c790) (0xc000405220) Create stream\nI0501 00:14:43.877555 1083 log.go:172] (0xc00044c790) (0xc000405220) Stream added, broadcasting: 1\nI0501 00:14:43.879586 1083 log.go:172] (0xc00044c790) Reply frame received for 1\nI0501 00:14:43.879620 1083 log.go:172] (0xc00044c790) (0xc000346dc0) Create stream\nI0501 00:14:43.879632 1083 log.go:172] (0xc00044c790) (0xc000346dc0) Stream added, broadcasting: 3\nI0501 00:14:43.880568 1083 log.go:172] (0xc00044c790) Reply frame received for 3\nI0501 00:14:43.880607 1083 log.go:172] (0xc00044c790) (0xc0004305a0) Create stream\nI0501 00:14:43.880622 1083 log.go:172] (0xc00044c790) (0xc0004305a0) Stream added, broadcasting: 5\nI0501 00:14:43.881644 1083 log.go:172] (0xc00044c790) Reply frame received for 5\nI0501 00:14:43.959176 1083 log.go:172] (0xc00044c790) Data frame received for 5\nI0501 00:14:43.959223 1083 log.go:172] (0xc0004305a0) (5) Data frame handling\nI0501 00:14:43.959280 1083 log.go:172] (0xc0004305a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31110/\nI0501 00:14:43.962497 1083 log.go:172] (0xc00044c790) Data frame received for 3\nI0501 00:14:43.962523 1083 log.go:172] (0xc000346dc0) (3) Data frame handling\nI0501 00:14:43.962562 1083 log.go:172] (0xc000346dc0) (3) Data frame 
sent\nI0501 00:14:43.962871 1083 log.go:172] (0xc00044c790) Data frame received for 5\nI0501 00:14:43.962892 1083 log.go:172] (0xc0004305a0) (5) Data frame handling\nI0501 00:14:43.962994 1083 log.go:172] (0xc00044c790) Data frame received for 3\nI0501 00:14:43.963023 1083 log.go:172] (0xc000346dc0) (3) Data frame handling\nI0501 00:14:43.964308 1083 log.go:172] (0xc00044c790) Data frame received for 1\nI0501 00:14:43.964320 1083 log.go:172] (0xc000405220) (1) Data frame handling\nI0501 00:14:43.964332 1083 log.go:172] (0xc000405220) (1) Data frame sent\nI0501 00:14:43.964355 1083 log.go:172] (0xc00044c790) (0xc000405220) Stream removed, broadcasting: 1\nI0501 00:14:43.964472 1083 log.go:172] (0xc00044c790) Go away received\nI0501 00:14:43.964615 1083 log.go:172] (0xc00044c790) (0xc000405220) Stream removed, broadcasting: 1\nI0501 00:14:43.964631 1083 log.go:172] (0xc00044c790) (0xc000346dc0) Stream removed, broadcasting: 3\nI0501 00:14:43.964638 1083 log.go:172] (0xc00044c790) (0xc0004305a0) Stream removed, broadcasting: 5\n" May 1 00:14:43.968: INFO: stdout: "affinity-nodeport-timeout-qvk2l" May 1 00:14:43.968: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-18, will wait for the garbage collector to delete the pods May 1 00:14:44.072: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.726666ms May 1 00:14:46.572: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 2.500255983s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:15:05.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-18" for this suite. 
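The run above demonstrates ClientIP session affinity on a NodePort service: all sixteen curl requests through 172.17.0.13:31110 returned the same backend (affinity-nodeport-timeout-b8zdl), and after an idle wait of roughly fifteen seconds the next request landed on a different pod (affinity-nodeport-timeout-qvk2l), showing the affinity timeout had expired. A minimal sketch of the Service shape this exercises — the name, selector, ports, and timeout value are illustrative, not taken from the test:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo            # hypothetical name
spec:
  type: NodePort
  selector:
    app: affinity-demo           # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP      # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10         # sticky window; after this much idle time a client may be re-balanced
EOF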
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:80.440 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":290,"completed":98,"skipped":1708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:15:05.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:15:53.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1723" for this suite. 
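The container-runtime block above asserts on four status fields per container: RestartCount, Phase, the Ready condition, and State; the terminate-cmd-rpa/rpof/rpn containers presumably correspond to restartPolicy Always, OnFailure, and Never. A rough sketch of observing the same fields by hand — pod name, image, and exit code are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo           # hypothetical name
spec:
  restartPolicy: Never           # one of the three policies the test cycles through
  containers:
  - name: main
    image: busybox               # any image with /bin/sh
    command: ["/bin/sh", "-c", "exit 1"]
EOF
# Read back the fields the assertions cover:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'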
• [SLOW TEST:47.926 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":290,"completed":99,"skipped":1732,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:15:53.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3411 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3411 I0501 00:15:53.540740 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3411, replica count: 2 I0501 00:15:56.591175 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:15:59.591408 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:15:59.591: INFO: Creating new exec pod May 1 00:16:04.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3411 execpod74d2g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 1 00:16:04.891: INFO: stderr: "I0501 00:16:04.753652 1104 log.go:172] (0xc00069e160) (0xc00032f4a0) Create stream\nI0501 00:16:04.753692 1104 log.go:172] (0xc00069e160) (0xc00032f4a0) Stream added, broadcasting: 1\nI0501 00:16:04.755777 1104 log.go:172] (0xc00069e160) Reply frame received for 1\nI0501 00:16:04.755809 1104 log.go:172] (0xc00069e160) (0xc0000dcdc0) Create stream\nI0501 00:16:04.755822 1104 log.go:172] (0xc00069e160) (0xc0000dcdc0) Stream added, broadcasting: 3\nI0501 00:16:04.756622 1104 log.go:172] (0xc00069e160) Reply frame received for 3\nI0501 00:16:04.756666 1104 log.go:172] (0xc00069e160) (0xc00032fc20) Create stream\nI0501 00:16:04.756681 1104 log.go:172] (0xc00069e160) (0xc00032fc20) Stream added, broadcasting: 5\nI0501 00:16:04.757619 1104 log.go:172] (0xc00069e160) 
Reply frame received for 5\nI0501 00:16:04.886186 1104 log.go:172] (0xc00069e160) Data frame received for 5\nI0501 00:16:04.886208 1104 log.go:172] (0xc00032fc20) (5) Data frame handling\nI0501 00:16:04.886222 1104 log.go:172] (0xc00032fc20) (5) Data frame sent\nI0501 00:16:04.886229 1104 log.go:172] (0xc00069e160) Data frame received for 5\nI0501 00:16:04.886235 1104 log.go:172] (0xc00032fc20) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0501 00:16:04.886339 1104 log.go:172] (0xc00069e160) Data frame received for 3\nI0501 00:16:04.886355 1104 log.go:172] (0xc0000dcdc0) (3) Data frame handling\nI0501 00:16:04.887763 1104 log.go:172] (0xc00069e160) Data frame received for 1\nI0501 00:16:04.887784 1104 log.go:172] (0xc00032f4a0) (1) Data frame handling\nI0501 00:16:04.887803 1104 log.go:172] (0xc00032f4a0) (1) Data frame sent\nI0501 00:16:04.887823 1104 log.go:172] (0xc00069e160) (0xc00032f4a0) Stream removed, broadcasting: 1\nI0501 00:16:04.887840 1104 log.go:172] (0xc00069e160) Go away received\nI0501 00:16:04.888081 1104 log.go:172] (0xc00069e160) (0xc00032f4a0) Stream removed, broadcasting: 1\nI0501 00:16:04.888094 1104 log.go:172] (0xc00069e160) (0xc0000dcdc0) Stream removed, broadcasting: 3\nI0501 00:16:04.888102 1104 log.go:172] (0xc00069e160) (0xc00032fc20) Stream removed, broadcasting: 5\n" May 1 00:16:04.891: INFO: stdout: "" May 1 00:16:04.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3411 execpod74d2g -- /bin/sh -x -c nc -zv -t -w 2 10.100.83.246 80' May 1 00:16:05.081: INFO: stderr: "I0501 00:16:05.011654 1124 log.go:172] (0xc0009ccc60) (0xc0000c3040) Create stream\nI0501 00:16:05.011704 1124 log.go:172] (0xc0009ccc60) (0xc0000c3040) Stream added, broadcasting: 1\nI0501 00:16:05.013513 1124 log.go:172] (0xc0009ccc60) Reply frame received for 1\nI0501 00:16:05.013560 1124 log.go:172] (0xc0009ccc60) (0xc00035d2c0) Create stream\nI0501 00:16:05.013580 1124 log.go:172] (0xc0009ccc60) (0xc00035d2c0) Stream added, broadcasting: 3\nI0501 00:16:05.014505 1124 log.go:172] (0xc0009ccc60) Reply frame received for 3\nI0501 00:16:05.014525 1124 log.go:172] (0xc0009ccc60) (0xc0000c35e0) Create stream\nI0501 00:16:05.014532 1124 log.go:172] (0xc0009ccc60) (0xc0000c35e0) Stream added, broadcasting: 5\nI0501 00:16:05.015297 1124 log.go:172] (0xc0009ccc60) Reply frame received for 5\nI0501 00:16:05.075984 1124 log.go:172] (0xc0009ccc60) Data frame received for 5\nI0501 00:16:05.076024 1124 log.go:172] (0xc0009ccc60) Data frame received for 3\nI0501 00:16:05.076058 1124 log.go:172] (0xc00035d2c0) (3) Data frame handling\nI0501 00:16:05.076085 1124 log.go:172] (0xc0000c35e0) (5) Data frame handling\nI0501 00:16:05.076101 1124 log.go:172] (0xc0000c35e0) (5) Data frame sent\nI0501 00:16:05.076112 1124 log.go:172] (0xc0009ccc60) Data frame received for 5\nI0501 00:16:05.076123 1124 log.go:172] (0xc0000c35e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.83.246 80\nConnection to 10.100.83.246 80 port [tcp/http] succeeded!\nI0501 00:16:05.077582 1124 log.go:172] (0xc0009ccc60) Data frame received for 1\nI0501 00:16:05.077592 1124 log.go:172] (0xc0000c3040) (1) Data frame handling\nI0501 00:16:05.077598 1124 log.go:172] (0xc0000c3040) (1) Data frame sent\nI0501 00:16:05.077606 1124 log.go:172] (0xc0009ccc60) (0xc0000c3040) Stream removed, broadcasting: 1\nI0501 00:16:05.077616 1124 log.go:172] (0xc0009ccc60) Go away 
received\nI0501 00:16:05.078001 1124 log.go:172] (0xc0009ccc60) (0xc0000c3040) Stream removed, broadcasting: 1\nI0501 00:16:05.078030 1124 log.go:172] (0xc0009ccc60) (0xc00035d2c0) Stream removed, broadcasting: 3\nI0501 00:16:05.078044 1124 log.go:172] (0xc0009ccc60) (0xc0000c35e0) Stream removed, broadcasting: 5\n" May 1 00:16:05.081: INFO: stdout: "" May 1 00:16:05.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3411 execpod74d2g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30949' May 1 00:16:05.282: INFO: stderr: "I0501 00:16:05.211106 1140 log.go:172] (0xc00003a420) (0xc00030ab40) Create stream\nI0501 00:16:05.211155 1140 log.go:172] (0xc00003a420) (0xc00030ab40) Stream added, broadcasting: 1\nI0501 00:16:05.213271 1140 log.go:172] (0xc00003a420) Reply frame received for 1\nI0501 00:16:05.213298 1140 log.go:172] (0xc00003a420) (0xc0000ddea0) Create stream\nI0501 00:16:05.213310 1140 log.go:172] (0xc00003a420) (0xc0000ddea0) Stream added, broadcasting: 3\nI0501 00:16:05.214130 1140 log.go:172] (0xc00003a420) Reply frame received for 3\nI0501 00:16:05.214169 1140 log.go:172] (0xc00003a420) (0xc0003881e0) Create stream\nI0501 00:16:05.214188 1140 log.go:172] (0xc00003a420) (0xc0003881e0) Stream added, broadcasting: 5\nI0501 00:16:05.214818 1140 log.go:172] (0xc00003a420) Reply frame received for 5\nI0501 00:16:05.273800 1140 log.go:172] (0xc00003a420) Data frame received for 5\nI0501 00:16:05.273964 1140 log.go:172] (0xc0003881e0) (5) Data frame handling\nI0501 00:16:05.274011 1140 log.go:172] (0xc0003881e0) (5) Data frame sent\nI0501 00:16:05.274075 1140 log.go:172] (0xc00003a420) Data frame received for 5\nI0501 00:16:05.274116 1140 log.go:172] (0xc0003881e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30949\nConnection to 172.17.0.13 30949 port [tcp/30949] succeeded!\nI0501 00:16:05.274403 1140 log.go:172] (0xc00003a420) Data frame received for 3\nI0501 00:16:05.274423 1140 log.go:172] (0xc0000ddea0) (3) Data frame handling\nI0501 00:16:05.278682 1140 log.go:172] (0xc00003a420) Data frame received for 1\nI0501 00:16:05.278698 1140 log.go:172] (0xc00030ab40) (1) Data frame handling\nI0501 00:16:05.278704 1140 log.go:172] (0xc00030ab40) (1) Data frame sent\nI0501 00:16:05.278719 1140 log.go:172] (0xc00003a420) (0xc00030ab40) Stream removed, broadcasting: 1\nI0501 00:16:05.278761 1140 log.go:172] (0xc00003a420) Go away received\nI0501 00:16:05.278946 1140 log.go:172] (0xc00003a420) (0xc00030ab40) Stream removed, broadcasting: 1\nI0501 00:16:05.278957 1140 log.go:172] (0xc00003a420) (0xc0000ddea0) Stream removed, broadcasting: 3\nI0501 00:16:05.278963 1140 log.go:172] (0xc00003a420) (0xc0003881e0) Stream removed, broadcasting: 5\n" May 1 00:16:05.282: INFO: stdout: "" May 1 00:16:05.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3411 execpod74d2g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30949' May 1 00:16:05.498: INFO: stderr: "I0501 00:16:05.414143 1161 log.go:172] (0xc000a34d10) (0xc00042efa0) Create stream\nI0501 00:16:05.414189 1161 log.go:172] (0xc000a34d10) (0xc00042efa0) Stream added, broadcasting: 1\nI0501 00:16:05.416447 1161 log.go:172] (0xc000a34d10) Reply frame received for 1\nI0501 00:16:05.416478 1161 log.go:172] (0xc000a34d10) (0xc00023a000) Create stream\nI0501 00:16:05.416489 1161 log.go:172] (0xc000a34d10) (0xc00023a000) Stream added, broadcasting: 3\nI0501 00:16:05.417655 1161 
log.go:172] (0xc000a34d10) Reply frame received for 3\nI0501 00:16:05.417684 1161 log.go:172] (0xc000a34d10) (0xc00042f220) Create stream\nI0501 00:16:05.417691 1161 log.go:172] (0xc000a34d10) (0xc00042f220) Stream added, broadcasting: 5\nI0501 00:16:05.418584 1161 log.go:172] (0xc000a34d10) Reply frame received for 5\nI0501 00:16:05.489238 1161 log.go:172] (0xc000a34d10) Data frame received for 5\nI0501 00:16:05.489266 1161 log.go:172] (0xc00042f220) (5) Data frame handling\nI0501 00:16:05.489279 1161 log.go:172] (0xc00042f220) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30949\nI0501 00:16:05.489898 1161 log.go:172] (0xc000a34d10) Data frame received for 5\nI0501 00:16:05.489916 1161 log.go:172] (0xc00042f220) (5) Data frame handling\nI0501 00:16:05.489932 1161 log.go:172] (0xc00042f220) (5) Data frame sent\nConnection to 172.17.0.12 30949 port [tcp/30949] succeeded!\nI0501 00:16:05.490312 1161 log.go:172] (0xc000a34d10) Data frame received for 5\nI0501 00:16:05.490343 1161 log.go:172] (0xc00042f220) (5) Data frame handling\nI0501 00:16:05.490368 1161 log.go:172] (0xc000a34d10) Data frame received for 3\nI0501 00:16:05.490428 1161 log.go:172] (0xc00023a000) (3) Data frame handling\nI0501 00:16:05.491337 1161 log.go:172] (0xc000a34d10) Data frame received for 1\nI0501 00:16:05.491351 1161 log.go:172] (0xc00042efa0) (1) Data frame handling\nI0501 00:16:05.491359 1161 log.go:172] (0xc00042efa0) (1) Data frame sent\nI0501 00:16:05.491368 1161 log.go:172] (0xc000a34d10) (0xc00042efa0) Stream removed, broadcasting: 1\nI0501 00:16:05.491656 1161 log.go:172] (0xc000a34d10) (0xc00042efa0) Stream removed, broadcasting: 1\nI0501 00:16:05.491669 1161 log.go:172] (0xc000a34d10) (0xc00023a000) Stream removed, broadcasting: 3\nI0501 00:16:05.491676 1161 log.go:172] (0xc000a34d10) (0xc00042f220) Stream removed, broadcasting: 5\n" May 1 00:16:05.498: INFO: stdout: "" May 1 00:16:05.498: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:16:05.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3411" for this suite. 
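For reference, the three probes above all use the same pattern: exec into the helper pod and run netcat against the service DNS name, the ClusterIP, and a node's NodePort. A minimal sketch, reusing the namespace, pod, and addresses from this run (they will differ in any other run); -z scans without sending data, -v is verbose, and -w 2 sets a two-second timeout:

    kubectl exec --namespace=services-3411 execpod74d2g -- \
        /bin/sh -c 'nc -zv -t -w 2 externalname-service 80'    # service DNS name
    kubectl exec --namespace=services-3411 execpod74d2g -- \
        /bin/sh -c 'nc -zv -t -w 2 10.100.83.246 80'           # ClusterIP
    kubectl exec --namespace=services-3411 execpod74d2g -- \
        /bin/sh -c 'nc -zv -t -w 2 172.17.0.13 30949'          # NodePort on a node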
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.292 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":290,"completed":100,"skipped":1753,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:16:05.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-900db131-452a-45c0-a980-3bbb4962d64f STEP: Creating a pod to test consume configMaps May 1 00:16:05.694: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85" in namespace "projected-6624" to be "Succeeded or Failed" May 1 00:16:05.712: INFO: Pod "pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85": Phase="Pending", Reason="", readiness=false. Elapsed: 18.331425ms May 1 00:16:07.716: INFO: Pod "pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021891336s May 1 00:16:09.720: INFO: Pod "pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025800467s STEP: Saw pod success May 1 00:16:09.720: INFO: Pod "pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85" satisfied condition "Succeeded or Failed" May 1 00:16:09.723: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85 container projected-configmap-volume-test: STEP: delete the pod May 1 00:16:09.944: INFO: Waiting for pod pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85 to disappear May 1 00:16:09.965: INFO: Pod pod-projected-configmaps-a9956ee3-19be-49dc-89af-ea3078713d85 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:16:09.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6624" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":290,"completed":101,"skipped":1757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:16:09.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0501 00:16:20.361971 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 00:16:20.362: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:16:20.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9161" for this suite. 
• [SLOW TEST:10.395 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":290,"completed":102,"skipped":1801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:16:20.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:16:21.019: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:16:25.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888981, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888981, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888981, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888980, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:16:27.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888981, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888981, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888981, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888980, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:16:30.215: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:16:31.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3116" for this suite. STEP: Destroying namespace "webhook-3116-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.832 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":290,"completed":103,"skipped":1858,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:16:31.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 1 00:16:32.220: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 1 00:16:34.949: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 1 00:16:37.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888995, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:16:39.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888995, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:16:41.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888995, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:16:43.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888995, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723888994, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:16:45.561: INFO: Waited 515.482675ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:16:46.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4052" for this suite. • [SLOW TEST:15.413 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":290,"completed":104,"skipped":1870,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:16:46.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5daf7f7a-f27f-4db7-825f-ec677a423a1a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5daf7f7a-f27f-4db7-825f-ec677a423a1a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:18:18.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8716" for this suite. 
• [SLOW TEST:91.944 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":105,"skipped":1881,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:18:18.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-2e2a7fe6-7f82-4fef-9139-bad806f3b726 STEP: Creating a pod to test consume secrets May 1 00:18:18.723: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce" in namespace "projected-9193" to be "Succeeded or Failed" May 1 00:18:18.757: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 34.230271ms May 1 00:18:20.761: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037552569s May 1 00:18:23.199: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475991853s May 1 00:18:25.318: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594479981s May 1 00:18:27.387: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.663591988s May 1 00:18:29.390: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 10.666499705s May 1 00:18:31.783: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 13.059646074s May 1 00:18:33.833: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 15.109899352s May 1 00:18:35.901: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 17.178322062s May 1 00:18:37.905: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Pending", Reason="", readiness=false. Elapsed: 19.181752158s May 1 00:18:39.908: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.185017055s STEP: Saw pod success May 1 00:18:39.908: INFO: Pod "pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce" satisfied condition "Succeeded or Failed" May 1 00:18:39.910: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce container projected-secret-volume-test: STEP: delete the pod May 1 00:18:39.958: INFO: Waiting for pod pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce to disappear May 1 00:18:39.982: INFO: Pod pod-projected-secrets-aab004b4-5466-4ab1-b76a-446fc4f46dce no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:18:39.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9193" for this suite. • [SLOW TEST:21.453 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":106,"skipped":1883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:18:40.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 1 00:18:40.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:40.173: INFO: Number of nodes with available pods: 0 May 1 00:18:40.173: INFO: Node latest-worker is running more than one daemon pod May 1 00:18:41.261: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:41.263: INFO: Number of nodes with available pods: 0 May 1 00:18:41.263: INFO: Node latest-worker is running more than one daemon pod May 1 00:18:42.179: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:42.181: INFO: Number of nodes with available pods: 0 May 1 00:18:42.181: INFO: Node latest-worker is running more than one daemon pod May 1 00:18:43.429: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:43.484: INFO: Number of nodes with available pods: 0 May 1 00:18:43.484: INFO: Node latest-worker is running more than one daemon pod May 1 00:18:44.177: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:44.180: INFO: Number of nodes with available pods: 0 May 1 00:18:44.180: INFO: Node latest-worker is running more than one daemon pod May 1 00:18:45.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:45.199: INFO: Number of nodes with available pods: 2 May 1 00:18:45.199: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
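The "revived" check in the wait loop that follows forces one daemon pod's status to Failed through the API and waits for the DaemonSet controller to replace it. A rough analogue that can be reproduced by hand, deleting a daemon pod instead of failing it (the pod name below is hypothetical; take a real one from the listing):

    kubectl get pods -n daemonsets-2803 -o wide
    kubectl delete pod daemon-set-abc12 -n daemonsets-2803   # hypothetical pod name
    kubectl get pods -n daemonsets-2803 -w                   # a replacement appears on the same node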
May 1 00:18:45.263: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:45.363: INFO: Number of nodes with available pods: 1 May 1 00:18:45.363: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:18:46.371: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:46.374: INFO: Number of nodes with available pods: 1 May 1 00:18:46.374: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:18:47.366: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:47.370: INFO: Number of nodes with available pods: 1 May 1 00:18:47.370: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:18:48.367: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:48.371: INFO: Number of nodes with available pods: 1 May 1 00:18:48.371: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:18:49.367: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:18:49.370: INFO: Number of nodes with available pods: 2 May 1 00:18:49.370: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2803, will wait for the garbage collector to delete the pods May 1 00:18:49.432: INFO: Deleting DaemonSet.extensions daemon-set took: 5.322598ms May 1 00:18:49.732: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.261408ms May 1 00:18:55.336: INFO: Number of nodes with available pods: 0 May 1 00:18:55.336: INFO: Number of running nodes: 0, number of available pods: 0 May 1 00:18:55.339: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2803/daemonsets","resourceVersion":"452249"},"items":null} May 1 00:18:55.341: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2803/pods","resourceVersion":"452249"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:18:55.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2803" for this suite. 
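The repeated "DaemonSet pods can't tolerate node latest-control-plane" lines are expected rather than an error: the test's DaemonSet carries no toleration for the master taint, so the control-plane node is excluded from the availability count. A DaemonSet that should also run there would add a toleration; a minimal sketch with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: tolerant-ds                   # hypothetical
    spec:
      selector:
        matchLabels:
          app: tolerant-ds
      template:
        metadata:
          labels:
            app: tolerant-ds
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.2   # illustrative image
    EOF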
• [SLOW TEST:15.362 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":290,"completed":107,"skipped":1923,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:18:55.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 1 00:18:55.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5670' May 1 00:18:55.728: INFO: stderr: "" May 1 00:18:55.728: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 1 00:18:56.732: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:18:56.732: INFO: Found 0 / 1 May 1 00:18:57.732: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:18:57.732: INFO: Found 0 / 1 May 1 00:18:58.733: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:18:58.733: INFO: Found 0 / 1 May 1 00:18:59.732: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:18:59.732: INFO: Found 1 / 1 May 1 00:18:59.732: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 1 00:18:59.736: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:18:59.736: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 1 00:18:59.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-fq8xr --namespace=kubectl-5670 -p {"metadata":{"annotations":{"x":"y"}}}' May 1 00:18:59.853: INFO: stderr: "" May 1 00:18:59.853: INFO: stdout: "pod/agnhost-master-fq8xr patched\n" STEP: checking annotations May 1 00:18:59.908: INFO: Selector matched 1 pods for map[app:agnhost] May 1 00:18:59.908: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:18:59.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5670" for this suite. 
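The patch step above is an ordinary strategic-merge patch and can be replayed verbatim (names from this run):

    kubectl patch pod agnhost-master-fq8xr --namespace=kubectl-5670 \
        -p '{"metadata":{"annotations":{"x":"y"}}}'
    # Verify; the jsonpath output should be "y":
    kubectl get pod agnhost-master-fq8xr --namespace=kubectl-5670 \
        -o jsonpath='{.metadata.annotations.x}'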
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":290,"completed":108,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:18:59.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 1 00:19:00.001: INFO: Waiting up to 5m0s for pod "pod-e53c3272-adaa-458a-af85-2fdfc3d7168a" in namespace "emptydir-1576" to be "Succeeded or Failed" May 1 00:19:00.005: INFO: Pod "pod-e53c3272-adaa-458a-af85-2fdfc3d7168a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.82978ms May 1 00:19:02.244: INFO: Pod "pod-e53c3272-adaa-458a-af85-2fdfc3d7168a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242341921s May 1 00:19:04.249: INFO: Pod "pod-e53c3272-adaa-458a-af85-2fdfc3d7168a": Phase="Running", Reason="", readiness=true. Elapsed: 4.247538654s May 1 00:19:06.309: INFO: Pod "pod-e53c3272-adaa-458a-af85-2fdfc3d7168a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.308182531s STEP: Saw pod success May 1 00:19:06.309: INFO: Pod "pod-e53c3272-adaa-458a-af85-2fdfc3d7168a" satisfied condition "Succeeded or Failed" May 1 00:19:06.312: INFO: Trying to get logs from node latest-worker2 pod pod-e53c3272-adaa-458a-af85-2fdfc3d7168a container test-container: STEP: delete the pod May 1 00:19:06.504: INFO: Waiting for pod pod-e53c3272-adaa-458a-af85-2fdfc3d7168a to disappear May 1 00:19:06.516: INFO: Pod pod-e53c3272-adaa-458a-af85-2fdfc3d7168a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:19:06.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1576" for this suite. 
• [SLOW TEST:6.606 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":109,"skipped":1964,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:19:06.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:19:08.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:19:10.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:19:12.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889148, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:19:15.201: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:19:15.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-830" for this suite. STEP: Destroying namespace "webhook-830-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.943 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":290,"completed":110,"skipped":1976,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:19:15.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-fpl6 STEP: Creating a pod to test atomic-volume-subpath May 1 00:19:15.536: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fpl6" in namespace "subpath-181" to be "Succeeded or Failed" May 1 00:19:15.583: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.815396ms May 1 00:19:17.588: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051643725s May 1 00:19:19.894: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358476893s May 1 00:19:21.898: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 6.362547485s May 1 00:19:23.902: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.366362464s May 1 00:19:25.941: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 10.405199566s May 1 00:19:27.945: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 12.408956597s May 1 00:19:29.949: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 14.412811728s May 1 00:19:31.953: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 16.416959211s May 1 00:19:33.956: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 18.420411375s May 1 00:19:35.960: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 20.424146376s May 1 00:19:37.964: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 22.427560353s May 1 00:19:39.967: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 24.431471953s May 1 00:19:41.972: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 26.435636807s May 1 00:19:43.975: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 28.439178394s May 1 00:19:45.980: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 30.443565704s May 1 00:19:47.984: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Running", Reason="", readiness=true. Elapsed: 32.447858535s May 1 00:19:49.988: INFO: Pod "pod-subpath-test-downwardapi-fpl6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.451736519s STEP: Saw pod success May 1 00:19:49.988: INFO: Pod "pod-subpath-test-downwardapi-fpl6" satisfied condition "Succeeded or Failed" May 1 00:19:49.990: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-fpl6 container test-container-subpath-downwardapi-fpl6: STEP: delete the pod May 1 00:19:50.253: INFO: Waiting for pod pod-subpath-test-downwardapi-fpl6 to disappear May 1 00:19:50.531: INFO: Pod pod-subpath-test-downwardapi-fpl6 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-fpl6 May 1 00:19:50.531: INFO: Deleting pod "pod-subpath-test-downwardapi-fpl6" in namespace "subpath-181" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:19:50.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-181" for this suite. 
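The atomic-writer subPath case mounts a single file out of a downwardAPI volume rather than the whole directory. A hedged sketch of that shape (names hypothetical; the test's own container additionally keeps polling the file, which is why the pod above stays Running for roughly thirty seconds before succeeding):

    kubectl apply -n subpath-181 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo                  # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "cat /probe"]
        volumeMounts:
        - name: downward
          mountPath: /probe               # a single file, not a directory
          subPath: podname
      volumes:
      - name: downward
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF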
• [SLOW TEST:35.084 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":290,"completed":111,"skipped":1990,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:19:50.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:19:51.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:19:53.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889191, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889191, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889191, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889190, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:19:55.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889191, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889191, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889191, loc:(*time.Location)(0x7c48300)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889190, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:19:58.645: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:19:58.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7004" for this suite. STEP: Destroying namespace "webhook-7004-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.396 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":290,"completed":112,"skipped":1995,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:19:58.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:20:01.885: INFO: deployment "sample-webhook-deployment" doesn't have the required revision 
set May 1 00:20:04.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889201, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889201, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889202, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889200, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:20:06.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889201, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889201, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889202, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889200, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:20:08.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889201, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889201, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889202, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723889200, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:20:11.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:20:11.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2920" for this suite. STEP: Destroying namespace "webhook-2920-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.432 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":290,"completed":113,"skipped":2003,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:20:11.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 1 00:20:17.197: INFO: Pod name wrapped-volume-race-470871e5-fb7a-45b8-986d-4de658988c05: Found 0 pods out of 5 May 1 00:20:22.532: INFO: Pod name wrapped-volume-race-470871e5-fb7a-45b8-986d-4de658988c05: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-470871e5-fb7a-45b8-986d-4de658988c05 in namespace emptydir-wrapper-5891, will wait for the garbage collector to delete the pods May 1 00:20:50.887: INFO: Deleting ReplicationController wrapped-volume-race-470871e5-fb7a-45b8-986d-4de658988c05 took: 19.247624ms May 1 00:20:51.287: INFO: Terminating ReplicationController wrapped-volume-race-470871e5-fb7a-45b8-986d-4de658988c05 pods took: 400.203376ms STEP: Creating RC which spawns configmap-volume pods May 1 00:21:05.051: INFO: Pod name wrapped-volume-race-d504fdff-7a91-4774-824b-15f18a1d3e3e: Found 0 pods out of 5 May 1 00:21:10.074: INFO: Pod name wrapped-volume-race-d504fdff-7a91-4774-824b-15f18a1d3e3e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d504fdff-7a91-4774-824b-15f18a1d3e3e in namespace emptydir-wrapper-5891, will wait for the garbage collector to delete the pods May 1 00:21:26.248: INFO: Deleting ReplicationController wrapped-volume-race-d504fdff-7a91-4774-824b-15f18a1d3e3e took: 100.146703ms May 1 00:21:26.649: INFO: Terminating ReplicationController wrapped-volume-race-d504fdff-7a91-4774-824b-15f18a1d3e3e pods took: 400.374615ms STEP: Creating RC 
which spawns configmap-volume pods May 1 00:21:35.406: INFO: Pod name wrapped-volume-race-c69d3849-5a20-41f0-abc6-6aea7c0ebf03: Found 0 pods out of 5 May 1 00:21:40.414: INFO: Pod name wrapped-volume-race-c69d3849-5a20-41f0-abc6-6aea7c0ebf03: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c69d3849-5a20-41f0-abc6-6aea7c0ebf03 in namespace emptydir-wrapper-5891, will wait for the garbage collector to delete the pods May 1 00:22:30.488: INFO: Deleting ReplicationController wrapped-volume-race-c69d3849-5a20-41f0-abc6-6aea7c0ebf03 took: 6.332329ms May 1 00:22:30.889: INFO: Terminating ReplicationController wrapped-volume-race-c69d3849-5a20-41f0-abc6-6aea7c0ebf03 pods took: 400.434314ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:22:55.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5891" for this suite. • [SLOW TEST:164.575 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":290,"completed":114,"skipped":2009,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:22:55.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-394 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-394 STEP: Creating statefulset with conflicting port in namespace statefulset-394 STEP: Waiting until pod test-pod will start running in namespace statefulset-394 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-394 May 1 00:23:02.163: INFO: Observed stateful pod in namespace: statefulset-394, name: ss-0, uid: 3029dbdd-eb65-4611-8c7f-d0a6d3154d87, status phase: Pending. Waiting for statefulset controller to delete. May 1 00:23:02.502: INFO: Observed stateful pod in namespace: statefulset-394, name: ss-0, uid: 3029dbdd-eb65-4611-8c7f-d0a6d3154d87, status phase: Failed. 
Waiting for statefulset controller to delete. May 1 00:23:02.538: INFO: Observed stateful pod in namespace: statefulset-394, name: ss-0, uid: 3029dbdd-eb65-4611-8c7f-d0a6d3154d87, status phase: Failed. Waiting for statefulset controller to delete. May 1 00:23:02.599: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-394 STEP: Removing pod with conflicting port in namespace statefulset-394 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-394 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 1 00:23:08.865: INFO: Deleting all statefulset in ns statefulset-394 May 1 00:23:08.867: INFO: Scaling statefulset ss to 0 May 1 00:23:18.886: INFO: Waiting for statefulset status.replicas updated to 0 May 1 00:23:18.897: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:23:18.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-394" for this suite. • [SLOW TEST:23.031 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":290,"completed":115,"skipped":2009,"failed":0} [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:23:18.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 1 00:23:27.645: INFO: Successfully updated pod "annotationupdate74fceee6-6266-4eea-8c02-e3e2caf00cb0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:23:31.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4684" for this suite. 
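------------------------------
Note on the Downward API test above: the pod mounts a downwardAPI volume that projects metadata.annotations into a file, the test then patches the pod's annotations ("Successfully updated pod ..."), and the kubelet rewrites the projected file, which is what the test reads back. The sketch below shows the general shape of such a pod spec; the package, function, names, and image are illustrative assumptions, not the suite's generated spec.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationUpdatePod builds a pod whose downwardAPI volume exposes the
// pod's own annotations as a file; updating the annotations later causes
// the kubelet to refresh that file.
func annotationUpdatePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        name,
			Annotations: map[string]string{"build": "one"}, // mutated later by the test
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
}

Propagation of the rewritten file is eventually consistent (it rides the kubelet's sync loop), which is why the test polls for the new value rather than asserting immediately after the update.
------------------------------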
• [SLOW TEST:12.711 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":290,"completed":116,"skipped":2009,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:23:31.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8 STEP: updating the pod May 1 00:24:02.303: INFO: Successfully updated pod "var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8" STEP: waiting for pod and container restart STEP: Failing liveness probe May 1 00:24:02.344: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-8727 PodName:var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:24:02.344: INFO: >>> kubeConfig: /root/.kube/config I0501 00:24:02.374132 7 log.go:172] (0xc0025800b0) (0xc000fa0320) Create stream I0501 00:24:02.374158 7 log.go:172] (0xc0025800b0) (0xc000fa0320) Stream added, broadcasting: 1 I0501 00:24:02.375431 7 log.go:172] (0xc0025800b0) Reply frame received for 1 I0501 00:24:02.375461 7 log.go:172] (0xc0025800b0) (0xc000fa0500) Create stream I0501 00:24:02.375472 7 log.go:172] (0xc0025800b0) (0xc000fa0500) Stream added, broadcasting: 3 I0501 00:24:02.376262 7 log.go:172] (0xc0025800b0) Reply frame received for 3 I0501 00:24:02.376294 7 log.go:172] (0xc0025800b0) (0xc000f400a0) Create stream I0501 00:24:02.376307 7 log.go:172] (0xc0025800b0) (0xc000f400a0) Stream added, broadcasting: 5 I0501 00:24:02.376973 7 log.go:172] (0xc0025800b0) Reply frame received for 5 I0501 00:24:02.475407 7 log.go:172] (0xc0025800b0) Data frame received for 3 I0501 00:24:02.475449 7 log.go:172] (0xc000fa0500) (3) Data frame handling I0501 00:24:02.475497 7 log.go:172] (0xc0025800b0) Data frame received for 5 I0501 00:24:02.475524 7 log.go:172] (0xc000f400a0) (5) Data frame handling I0501 00:24:02.476540 7 log.go:172] (0xc0025800b0) Data frame received for 1 I0501 00:24:02.476562 7 log.go:172] (0xc000fa0320) (1) Data frame handling I0501 00:24:02.476581 7 log.go:172] (0xc000fa0320) (1) Data frame sent I0501 00:24:02.476607 7 log.go:172] (0xc0025800b0) (0xc000fa0320) Stream removed, broadcasting: 1 I0501 00:24:02.476629 7 log.go:172] (0xc0025800b0) Go away received I0501 00:24:02.476726 7 log.go:172] 
(0xc0025800b0) (0xc000fa0320) Stream removed, broadcasting: 1 I0501 00:24:02.476753 7 log.go:172] (0xc0025800b0) (0xc000fa0500) Stream removed, broadcasting: 3 I0501 00:24:02.476777 7 log.go:172] (0xc0025800b0) (0xc000f400a0) Stream removed, broadcasting: 5 May 1 00:24:02.476: INFO: Pod exec output: / STEP: Waiting for container to restart May 1 00:24:02.480: INFO: Container dapi-container, restarts: 0 May 1 00:24:12.484: INFO: Container dapi-container, restarts: 0 May 1 00:24:22.484: INFO: Container dapi-container, restarts: 0 May 1 00:24:32.484: INFO: Container dapi-container, restarts: 0 May 1 00:24:42.484: INFO: Container dapi-container, restarts: 1 May 1 00:24:42.484: INFO: Container has restart count: 1 STEP: Rewriting the file May 1 00:24:42.487: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-8727 PodName:var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:24:42.487: INFO: >>> kubeConfig: /root/.kube/config I0501 00:24:42.523185 7 log.go:172] (0xc0027aa2c0) (0xc002385540) Create stream I0501 00:24:42.523214 7 log.go:172] (0xc0027aa2c0) (0xc002385540) Stream added, broadcasting: 1 I0501 00:24:42.526105 7 log.go:172] (0xc0027aa2c0) Reply frame received for 1 I0501 00:24:42.526131 7 log.go:172] (0xc0027aa2c0) (0xc000d445a0) Create stream I0501 00:24:42.526140 7 log.go:172] (0xc0027aa2c0) (0xc000d445a0) Stream added, broadcasting: 3 I0501 00:24:42.526996 7 log.go:172] (0xc0027aa2c0) Reply frame received for 3 I0501 00:24:42.527029 7 log.go:172] (0xc0027aa2c0) (0xc000d45180) Create stream I0501 00:24:42.527045 7 log.go:172] (0xc0027aa2c0) (0xc000d45180) Stream added, broadcasting: 5 I0501 00:24:42.527786 7 log.go:172] (0xc0027aa2c0) Reply frame received for 5 I0501 00:24:42.586680 7 log.go:172] (0xc0027aa2c0) Data frame received for 3 I0501 00:24:42.586730 7 log.go:172] (0xc000d445a0) (3) Data frame handling I0501 00:24:42.586770 7 log.go:172] (0xc0027aa2c0) Data frame received for 5 I0501 00:24:42.586804 7 log.go:172] (0xc000d45180) (5) Data frame handling I0501 00:24:42.588272 7 log.go:172] (0xc0027aa2c0) Data frame received for 1 I0501 00:24:42.588295 7 log.go:172] (0xc002385540) (1) Data frame handling I0501 00:24:42.588305 7 log.go:172] (0xc002385540) (1) Data frame sent I0501 00:24:42.588315 7 log.go:172] (0xc0027aa2c0) (0xc002385540) Stream removed, broadcasting: 1 I0501 00:24:42.588335 7 log.go:172] (0xc0027aa2c0) Go away received I0501 00:24:42.588432 7 log.go:172] (0xc0027aa2c0) (0xc002385540) Stream removed, broadcasting: 1 I0501 00:24:42.588455 7 log.go:172] (0xc0027aa2c0) (0xc000d445a0) Stream removed, broadcasting: 3 I0501 00:24:42.588475 7 log.go:172] (0xc0027aa2c0) (0xc000d45180) Stream removed, broadcasting: 5 May 1 00:24:42.588: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 1 00:25:16.595: INFO: Container has restart count: 2 May 1 00:26:18.596: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 1 00:26:18.599: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-8727 PodName:var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:26:18.599: INFO: >>> kubeConfig: /root/.kube/config I0501 00:26:18.632980 7 log.go:172] (0xc002b87290) (0xc002087900) Create stream I0501 00:26:18.633012 7 
log.go:172] (0xc002b87290) (0xc002087900) Stream added, broadcasting: 1 I0501 00:26:18.635249 7 log.go:172] (0xc002b87290) Reply frame received for 1 I0501 00:26:18.635294 7 log.go:172] (0xc002b87290) (0xc001f86dc0) Create stream I0501 00:26:18.635309 7 log.go:172] (0xc002b87290) (0xc001f86dc0) Stream added, broadcasting: 3 I0501 00:26:18.636298 7 log.go:172] (0xc002b87290) Reply frame received for 3 I0501 00:26:18.636345 7 log.go:172] (0xc002b87290) (0xc0020879a0) Create stream I0501 00:26:18.636364 7 log.go:172] (0xc002b87290) (0xc0020879a0) Stream added, broadcasting: 5 I0501 00:26:18.637736 7 log.go:172] (0xc002b87290) Reply frame received for 5 I0501 00:26:18.722389 7 log.go:172] (0xc002b87290) Data frame received for 5 I0501 00:26:18.722418 7 log.go:172] (0xc0020879a0) (5) Data frame handling I0501 00:26:18.722434 7 log.go:172] (0xc002b87290) Data frame received for 3 I0501 00:26:18.722446 7 log.go:172] (0xc001f86dc0) (3) Data frame handling I0501 00:26:18.723839 7 log.go:172] (0xc002b87290) Data frame received for 1 I0501 00:26:18.723862 7 log.go:172] (0xc002087900) (1) Data frame handling I0501 00:26:18.723888 7 log.go:172] (0xc002087900) (1) Data frame sent I0501 00:26:18.723987 7 log.go:172] (0xc002b87290) (0xc002087900) Stream removed, broadcasting: 1 I0501 00:26:18.724006 7 log.go:172] (0xc002b87290) Go away received I0501 00:26:18.724130 7 log.go:172] (0xc002b87290) (0xc002087900) Stream removed, broadcasting: 1 I0501 00:26:18.724156 7 log.go:172] (0xc002b87290) (0xc001f86dc0) Stream removed, broadcasting: 3 I0501 00:26:18.724170 7 log.go:172] (0xc002b87290) (0xc0020879a0) Stream removed, broadcasting: 5 May 1 00:26:18.727: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-8727 PodName:var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:26:18.727: INFO: >>> kubeConfig: /root/.kube/config I0501 00:26:18.752124 7 log.go:172] (0xc002f222c0) (0xc0025775e0) Create stream I0501 00:26:18.752145 7 log.go:172] (0xc002f222c0) (0xc0025775e0) Stream added, broadcasting: 1 I0501 00:26:18.754136 7 log.go:172] (0xc002f222c0) Reply frame received for 1 I0501 00:26:18.754177 7 log.go:172] (0xc002f222c0) (0xc0011db900) Create stream I0501 00:26:18.754197 7 log.go:172] (0xc002f222c0) (0xc0011db900) Stream added, broadcasting: 3 I0501 00:26:18.755242 7 log.go:172] (0xc002f222c0) Reply frame received for 3 I0501 00:26:18.755291 7 log.go:172] (0xc002f222c0) (0xc002577680) Create stream I0501 00:26:18.755314 7 log.go:172] (0xc002f222c0) (0xc002577680) Stream added, broadcasting: 5 I0501 00:26:18.756211 7 log.go:172] (0xc002f222c0) Reply frame received for 5 I0501 00:26:18.810621 7 log.go:172] (0xc002f222c0) Data frame received for 5 I0501 00:26:18.810641 7 log.go:172] (0xc002577680) (5) Data frame handling I0501 00:26:18.810680 7 log.go:172] (0xc002f222c0) Data frame received for 3 I0501 00:26:18.810703 7 log.go:172] (0xc0011db900) (3) Data frame handling I0501 00:26:18.811548 7 log.go:172] (0xc002f222c0) Data frame received for 1 I0501 00:26:18.811564 7 log.go:172] (0xc0025775e0) (1) Data frame handling I0501 00:26:18.811588 7 log.go:172] (0xc0025775e0) (1) Data frame sent I0501 00:26:18.811649 7 log.go:172] (0xc002f222c0) (0xc0025775e0) Stream removed, broadcasting: 1 I0501 00:26:18.811686 7 log.go:172] (0xc002f222c0) Go away received I0501 00:26:18.811711 7 log.go:172] (0xc002f222c0) (0xc0025775e0) Stream removed, broadcasting: 1 
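------------------------------
The interleaved "Create stream / Stream added, broadcasting: 1, 3, 5 / Data frame received" lines here are client-go's SPDY machinery logged at high verbosity: each ExecWithOptions call opens one multiplexed connection to the kubelet's exec endpoint with one stream per channel (error, stdout, stderr), and the matching "Stream removed" lines are the teardown of those same channels. A minimal sketch of the same call path, assuming our own function name and omitting the framework's option plumbing:

package sketch

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command in a container via the pods/exec
// subresource, roughly what the framework's ExecWithOptions does.
func execInPod(config *rest.Config, cs kubernetes.Interface, ns, pod, container, cmd string) (string, string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	// Stream blocks until the remote command exits; stdout and stderr each
	// ride their own SPDY stream, matching the broadcast IDs in the log.
	err = exec.Stream(remotecommand.StreamOptions{
		Stdout: &stdout,
		Stderr: &stderr,
	})
	return stdout.String(), stderr.String(), err
}

Note how the test drives this with plain shell probes (rm, echo >, test -f, test ! -f) against the subpath mount, so the pass/fail signal is just each command's exit status coming back over the error channel.
------------------------------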
I0501 00:26:18.811722 7 log.go:172] (0xc002f222c0) (0xc0011db900) Stream removed, broadcasting: 3 I0501 00:26:18.811742 7 log.go:172] (0xc002f222c0) (0xc002577680) Stream removed, broadcasting: 5 May 1 00:26:18.811: INFO: Deleting pod "var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8" in namespace "var-expansion-8727" May 1 00:26:18.816: INFO: Wait up to 5m0s for pod "var-expansion-46b1443e-207c-4d0c-b98e-2a6994d33ff8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:27:16.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8727" for this suite. • [SLOW TEST:225.305 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":290,"completed":117,"skipped":2016,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:27:17.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 1 00:27:17.099: INFO: Waiting up to 5m0s for pod "client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6" in namespace "containers-5769" to be "Succeeded or Failed" May 1 00:27:17.103: INFO: Pod "client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.593106ms May 1 00:27:19.107: INFO: Pod "client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007341953s May 1 00:27:21.110: INFO: Pod "client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010939353s May 1 00:27:23.113: INFO: Pod "client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013830751s STEP: Saw pod success May 1 00:27:23.113: INFO: Pod "client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6" satisfied condition "Succeeded or Failed" May 1 00:27:23.115: INFO: Trying to get logs from node latest-worker pod client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6 container test-container: STEP: delete the pod May 1 00:27:23.162: INFO: Waiting for pod client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6 to disappear May 1 00:27:23.169: INFO: Pod client-containers-eb9c2ec7-ff30-47ff-89b5-ee3656e52ff6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:27:23.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5769" for this suite. • [SLOW TEST:6.174 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":290,"completed":118,"skipped":2025,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:27:23.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 1 00:27:28.342: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:27:28.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4941" for this suite. 
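------------------------------
What "adopt" and "release" mean in the ReplicaSet test above: adoption is the controller writing itself into the orphan pod's ownerReferences once the pod's labels match the ReplicaSet's selector; release is the reverse, triggered here by changing the pod's 'name' label so it no longer matches. A small helper, ours rather than the suite's, that states the adoption check:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// adoptedBy reports whether pod is currently controlled by the named
// ReplicaSet, i.e. whether an ownerReference with controller=true points
// at a ReplicaSet of that name.
func adoptedBy(pod *corev1.Pod, rsName string) bool {
	ref := metav1.GetControllerOf(pod)
	return ref != nil && ref.Kind == "ReplicaSet" && ref.Name == rsName
}

After the label change the controller removes that ownerReference, so the check flips back to false and the pod survives on its own, which is the sequence the PASSED summary below records.
------------------------------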
• [SLOW TEST:5.401 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":290,"completed":119,"skipped":2030,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:27:28.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7178 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7178 I0501 00:27:28.823185 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7178, replica count: 2 I0501 00:27:31.873573 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:27:34.873814 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:27:34.873: INFO: Creating new exec pod May 1 00:27:42.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7178 execpodmn9hn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 1 00:27:44.950: INFO: stderr: "I0501 00:27:44.868362 1223 log.go:172] (0xc00003a580) (0xc000662aa0) Create stream\nI0501 00:27:44.868389 1223 log.go:172] (0xc00003a580) (0xc000662aa0) Stream added, broadcasting: 1\nI0501 00:27:44.870752 1223 log.go:172] (0xc00003a580) Reply frame received for 1\nI0501 00:27:44.870801 1223 log.go:172] (0xc00003a580) (0xc0005fed20) Create stream\nI0501 00:27:44.870818 1223 log.go:172] (0xc00003a580) (0xc0005fed20) Stream added, broadcasting: 3\nI0501 00:27:44.871630 1223 log.go:172] (0xc00003a580) Reply frame received for 3\nI0501 00:27:44.871684 1223 log.go:172] (0xc00003a580) (0xc00057a5a0) Create stream\nI0501 00:27:44.871710 1223 log.go:172] (0xc00003a580) (0xc00057a5a0) Stream added, broadcasting: 5\nI0501 00:27:44.872555 1223 log.go:172] (0xc00003a580) Reply frame received for 5\nI0501 00:27:44.943289 1223 log.go:172] (0xc00003a580) Data frame received for 5\nI0501 00:27:44.943324 1223 log.go:172] (0xc00057a5a0) (5) Data frame handling\nI0501 00:27:44.943339 1223 log.go:172] (0xc00057a5a0) (5) Data frame sent\nI0501 
00:27:44.943346 1223 log.go:172] (0xc00003a580) Data frame received for 5\nI0501 00:27:44.943352 1223 log.go:172] (0xc00057a5a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0501 00:27:44.943371 1223 log.go:172] (0xc00057a5a0) (5) Data frame sent\nI0501 00:27:44.943858 1223 log.go:172] (0xc00003a580) Data frame received for 5\nI0501 00:27:44.943911 1223 log.go:172] (0xc00057a5a0) (5) Data frame handling\nI0501 00:27:44.943979 1223 log.go:172] (0xc00003a580) Data frame received for 3\nI0501 00:27:44.944011 1223 log.go:172] (0xc0005fed20) (3) Data frame handling\nI0501 00:27:44.945324 1223 log.go:172] (0xc00003a580) Data frame received for 1\nI0501 00:27:44.945345 1223 log.go:172] (0xc000662aa0) (1) Data frame handling\nI0501 00:27:44.945365 1223 log.go:172] (0xc000662aa0) (1) Data frame sent\nI0501 00:27:44.945541 1223 log.go:172] (0xc00003a580) (0xc000662aa0) Stream removed, broadcasting: 1\nI0501 00:27:44.945604 1223 log.go:172] (0xc00003a580) Go away received\nI0501 00:27:44.945944 1223 log.go:172] (0xc00003a580) (0xc000662aa0) Stream removed, broadcasting: 1\nI0501 00:27:44.945967 1223 log.go:172] (0xc00003a580) (0xc0005fed20) Stream removed, broadcasting: 3\nI0501 00:27:44.945979 1223 log.go:172] (0xc00003a580) (0xc00057a5a0) Stream removed, broadcasting: 5\n" May 1 00:27:44.950: INFO: stdout: "" May 1 00:27:44.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7178 execpodmn9hn -- /bin/sh -x -c nc -zv -t -w 2 10.108.155.165 80' May 1 00:27:45.147: INFO: stderr: "I0501 00:27:45.074018 1253 log.go:172] (0xc000a951e0) (0xc00070ee60) Create stream\nI0501 00:27:45.074078 1253 log.go:172] (0xc000a951e0) (0xc00070ee60) Stream added, broadcasting: 1\nI0501 00:27:45.076769 1253 log.go:172] (0xc000a951e0) Reply frame received for 1\nI0501 00:27:45.076821 1253 log.go:172] (0xc000a951e0) (0xc00070f400) Create stream\nI0501 00:27:45.076851 1253 log.go:172] (0xc000a951e0) (0xc00070f400) Stream added, broadcasting: 3\nI0501 00:27:45.078020 1253 log.go:172] (0xc000a951e0) Reply frame received for 3\nI0501 00:27:45.078056 1253 log.go:172] (0xc000a951e0) (0xc00070fe00) Create stream\nI0501 00:27:45.078080 1253 log.go:172] (0xc000a951e0) (0xc00070fe00) Stream added, broadcasting: 5\nI0501 00:27:45.079063 1253 log.go:172] (0xc000a951e0) Reply frame received for 5\nI0501 00:27:45.141600 1253 log.go:172] (0xc000a951e0) Data frame received for 5\nI0501 00:27:45.141633 1253 log.go:172] (0xc00070fe00) (5) Data frame handling\nI0501 00:27:45.141646 1253 log.go:172] (0xc00070fe00) (5) Data frame sent\nI0501 00:27:45.141657 1253 log.go:172] (0xc000a951e0) Data frame received for 5\nI0501 00:27:45.141665 1253 log.go:172] (0xc00070fe00) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.155.165 80\nConnection to 10.108.155.165 80 port [tcp/http] succeeded!\nI0501 00:27:45.141704 1253 log.go:172] (0xc000a951e0) Data frame received for 3\nI0501 00:27:45.141733 1253 log.go:172] (0xc00070f400) (3) Data frame handling\nI0501 00:27:45.143083 1253 log.go:172] (0xc000a951e0) Data frame received for 1\nI0501 00:27:45.143100 1253 log.go:172] (0xc00070ee60) (1) Data frame handling\nI0501 00:27:45.143116 1253 log.go:172] (0xc00070ee60) (1) Data frame sent\nI0501 00:27:45.143131 1253 log.go:172] (0xc000a951e0) (0xc00070ee60) Stream removed, broadcasting: 1\nI0501 00:27:45.143144 1253 log.go:172] (0xc000a951e0) Go away received\nI0501 00:27:45.143614 1253 
log.go:172] (0xc000a951e0) (0xc00070ee60) Stream removed, broadcasting: 1\nI0501 00:27:45.143641 1253 log.go:172] (0xc000a951e0) (0xc00070f400) Stream removed, broadcasting: 3\nI0501 00:27:45.143652 1253 log.go:172] (0xc000a951e0) (0xc00070fe00) Stream removed, broadcasting: 5\n" May 1 00:27:45.147: INFO: stdout: "" May 1 00:27:45.147: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:27:45.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7178" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:16.663 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":290,"completed":120,"skipped":2039,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:27:45.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 00:27:45.314: INFO: Waiting up to 5m0s for pod "pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc" in namespace "emptydir-7384" to be "Succeeded or Failed" May 1 00:27:45.324: INFO: Pod "pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.361756ms May 1 00:27:47.433: INFO: Pod "pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118972335s May 1 00:27:49.445: INFO: Pod "pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130767599s May 1 00:27:51.768: INFO: Pod "pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.453953208s STEP: Saw pod success May 1 00:27:51.768: INFO: Pod "pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc" satisfied condition "Succeeded or Failed" May 1 00:27:51.773: INFO: Trying to get logs from node latest-worker2 pod pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc container test-container: STEP: delete the pod May 1 00:27:52.231: INFO: Waiting for pod pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc to disappear May 1 00:27:52.236: INFO: Pod pod-ab67144f-8dfa-4ab5-a7bd-10f5e4b3c9cc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:27:52.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7384" for this suite. • [SLOW TEST:7.130 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":121,"skipped":2044,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:27:52.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:28:08.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1627" for this suite. • [SLOW TEST:16.289 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":290,"completed":122,"skipped":2048,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:28:08.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-55ee947b-3f39-4b33-9789-06b60e65b4aa STEP: Creating a pod to test consume configMaps May 1 00:28:08.758: INFO: Waiting up to 5m0s for pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749" in namespace "configmap-156" to be "Succeeded or Failed" May 1 00:28:08.762: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382291ms May 1 00:28:10.978: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220126772s May 1 00:28:13.062: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304336021s May 1 00:28:15.441: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68286255s May 1 00:28:17.444: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686538819s May 1 00:28:19.449: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 10.69143572s May 1 00:28:21.481: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 12.723463851s May 1 00:28:23.484: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Pending", Reason="", readiness=false. Elapsed: 14.72661756s May 1 00:28:25.488: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Running", Reason="", readiness=true. Elapsed: 16.729942132s May 1 00:28:27.491: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.733645039s STEP: Saw pod success May 1 00:28:27.491: INFO: Pod "pod-configmaps-6171eaad-8881-411c-933f-b60668b44749" satisfied condition "Succeeded or Failed" May 1 00:28:27.494: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6171eaad-8881-411c-933f-b60668b44749 container configmap-volume-test: STEP: delete the pod May 1 00:28:27.607: INFO: Waiting for pod pod-configmaps-6171eaad-8881-411c-933f-b60668b44749 to disappear May 1 00:28:27.661: INFO: Pod pod-configmaps-6171eaad-8881-411c-933f-b60668b44749 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:28:27.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-156" for this suite. • [SLOW TEST:19.008 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":123,"skipped":2100,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:28:27.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1049/configmap-test-3508d268-2130-46ca-bd97-67299ccbabb4 STEP: Creating a pod to test consume configMaps May 1 00:28:27.820: INFO: Waiting up to 5m0s for pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979" in namespace "configmap-1049" to be "Succeeded or Failed" May 1 00:28:27.836: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Pending", Reason="", readiness=false. Elapsed: 16.465163ms May 1 00:28:29.840: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019991336s May 1 00:28:31.844: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023746238s May 1 00:28:34.427: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607412069s May 1 00:28:37.140: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Pending", Reason="", readiness=false. Elapsed: 9.320306625s May 1 00:28:39.144: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Running", Reason="", readiness=true. Elapsed: 11.324503214s May 1 00:28:41.148: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.328120307s STEP: Saw pod success May 1 00:28:41.148: INFO: Pod "pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979" satisfied condition "Succeeded or Failed" May 1 00:28:41.151: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979 container env-test: STEP: delete the pod May 1 00:28:41.177: INFO: Waiting for pod pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979 to disappear May 1 00:28:41.216: INFO: Pod pod-configmaps-5687dfcd-1a1b-4c7e-8f8d-e17d26778979 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:28:41.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1049" for this suite. • [SLOW TEST:13.554 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":290,"completed":124,"skipped":2113,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:28:41.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 1 00:28:53.471: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:53.471: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:53.494856 7 log.go:172] (0xc002eb84d0) (0xc002577360) Create stream I0501 00:28:53.494872 7 log.go:172] (0xc002eb84d0) (0xc002577360) Stream added, broadcasting: 1 I0501 00:28:53.496190 7 log.go:172] (0xc002eb84d0) Reply frame received for 1 I0501 00:28:53.496232 7 log.go:172] (0xc002eb84d0) (0xc000fceaa0) Create stream I0501 00:28:53.496250 7 log.go:172] (0xc002eb84d0) (0xc000fceaa0) Stream added, broadcasting: 3 I0501 00:28:53.496929 7 log.go:172] (0xc002eb84d0) Reply frame received for 3 I0501 00:28:53.496946 7 log.go:172] (0xc002eb84d0) (0xc000fcec80) Create stream I0501 00:28:53.496952 7 log.go:172] (0xc002eb84d0) (0xc000fcec80) Stream added, broadcasting: 5 I0501 00:28:53.497623 7 log.go:172] (0xc002eb84d0) Reply frame received for 5 I0501 00:28:53.571615 7 log.go:172] (0xc002eb84d0) Data frame received for 5 I0501 00:28:53.571638 7 log.go:172] (0xc000fcec80) (5) Data 
frame handling I0501 00:28:53.571676 7 log.go:172] (0xc002eb84d0) Data frame received for 3 I0501 00:28:53.571704 7 log.go:172] (0xc000fceaa0) (3) Data frame handling I0501 00:28:53.571725 7 log.go:172] (0xc000fceaa0) (3) Data frame sent I0501 00:28:53.571741 7 log.go:172] (0xc002eb84d0) Data frame received for 3 I0501 00:28:53.571751 7 log.go:172] (0xc000fceaa0) (3) Data frame handling I0501 00:28:53.573095 7 log.go:172] (0xc002eb84d0) Data frame received for 1 I0501 00:28:53.573306 7 log.go:172] (0xc002577360) (1) Data frame handling I0501 00:28:53.573329 7 log.go:172] (0xc002577360) (1) Data frame sent I0501 00:28:53.573343 7 log.go:172] (0xc002eb84d0) (0xc002577360) Stream removed, broadcasting: 1 I0501 00:28:53.573363 7 log.go:172] (0xc002eb84d0) Go away received I0501 00:28:53.573633 7 log.go:172] (0xc002eb84d0) (0xc002577360) Stream removed, broadcasting: 1 I0501 00:28:53.573660 7 log.go:172] (0xc002eb84d0) (0xc000fceaa0) Stream removed, broadcasting: 3 I0501 00:28:53.573681 7 log.go:172] (0xc002eb84d0) (0xc000fcec80) Stream removed, broadcasting: 5 May 1 00:28:53.573: INFO: Exec stderr: "" May 1 00:28:53.573: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:53.573: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:53.655341 7 log.go:172] (0xc002f22210) (0xc001248640) Create stream I0501 00:28:53.655374 7 log.go:172] (0xc002f22210) (0xc001248640) Stream added, broadcasting: 1 I0501 00:28:53.656882 7 log.go:172] (0xc002f22210) Reply frame received for 1 I0501 00:28:53.656914 7 log.go:172] (0xc002f22210) (0xc001f87c20) Create stream I0501 00:28:53.656926 7 log.go:172] (0xc002f22210) (0xc001f87c20) Stream added, broadcasting: 3 I0501 00:28:53.657820 7 log.go:172] (0xc002f22210) Reply frame received for 3 I0501 00:28:53.657838 7 log.go:172] (0xc002f22210) (0xc001248780) Create stream I0501 00:28:53.657843 7 log.go:172] (0xc002f22210) (0xc001248780) Stream added, broadcasting: 5 I0501 00:28:53.658513 7 log.go:172] (0xc002f22210) Reply frame received for 5 I0501 00:28:53.726324 7 log.go:172] (0xc002f22210) Data frame received for 5 I0501 00:28:53.726349 7 log.go:172] (0xc001248780) (5) Data frame handling I0501 00:28:53.726384 7 log.go:172] (0xc002f22210) Data frame received for 3 I0501 00:28:53.726411 7 log.go:172] (0xc001f87c20) (3) Data frame handling I0501 00:28:53.726429 7 log.go:172] (0xc001f87c20) (3) Data frame sent I0501 00:28:53.726442 7 log.go:172] (0xc002f22210) Data frame received for 3 I0501 00:28:53.726458 7 log.go:172] (0xc001f87c20) (3) Data frame handling I0501 00:28:53.727647 7 log.go:172] (0xc002f22210) Data frame received for 1 I0501 00:28:53.727673 7 log.go:172] (0xc001248640) (1) Data frame handling I0501 00:28:53.727689 7 log.go:172] (0xc001248640) (1) Data frame sent I0501 00:28:53.727723 7 log.go:172] (0xc002f22210) (0xc001248640) Stream removed, broadcasting: 1 I0501 00:28:53.727804 7 log.go:172] (0xc002f22210) Go away received I0501 00:28:53.727846 7 log.go:172] (0xc002f22210) (0xc001248640) Stream removed, broadcasting: 1 I0501 00:28:53.727886 7 log.go:172] (0xc002f22210) (0xc001f87c20) Stream removed, broadcasting: 3 I0501 00:28:53.727915 7 log.go:172] (0xc002f22210) (0xc001248780) Stream removed, broadcasting: 5 May 1 00:28:53.727: INFO: Exec stderr: "" May 1 00:28:53.727: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:53.728: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:53.754368 7 log.go:172] (0xc002f22840) (0xc001248aa0) Create stream I0501 00:28:53.754388 7 log.go:172] (0xc002f22840) (0xc001248aa0) Stream added, broadcasting: 1 I0501 00:28:53.755753 7 log.go:172] (0xc002f22840) Reply frame received for 1 I0501 00:28:53.755782 7 log.go:172] (0xc002f22840) (0xc002577400) Create stream I0501 00:28:53.755794 7 log.go:172] (0xc002f22840) (0xc002577400) Stream added, broadcasting: 3 I0501 00:28:53.756535 7 log.go:172] (0xc002f22840) Reply frame received for 3 I0501 00:28:53.756549 7 log.go:172] (0xc002f22840) (0xc001248e60) Create stream I0501 00:28:53.756554 7 log.go:172] (0xc002f22840) (0xc001248e60) Stream added, broadcasting: 5 I0501 00:28:53.757351 7 log.go:172] (0xc002f22840) Reply frame received for 5 I0501 00:28:53.818211 7 log.go:172] (0xc002f22840) Data frame received for 3 I0501 00:28:53.818258 7 log.go:172] (0xc002577400) (3) Data frame handling I0501 00:28:53.818283 7 log.go:172] (0xc002577400) (3) Data frame sent I0501 00:28:53.818299 7 log.go:172] (0xc002f22840) Data frame received for 3 I0501 00:28:53.818309 7 log.go:172] (0xc002577400) (3) Data frame handling I0501 00:28:53.818356 7 log.go:172] (0xc002f22840) Data frame received for 5 I0501 00:28:53.818395 7 log.go:172] (0xc001248e60) (5) Data frame handling I0501 00:28:53.819504 7 log.go:172] (0xc002f22840) Data frame received for 1 I0501 00:28:53.819538 7 log.go:172] (0xc001248aa0) (1) Data frame handling I0501 00:28:53.819567 7 log.go:172] (0xc001248aa0) (1) Data frame sent I0501 00:28:53.819588 7 log.go:172] (0xc002f22840) (0xc001248aa0) Stream removed, broadcasting: 1 I0501 00:28:53.819615 7 log.go:172] (0xc002f22840) Go away received I0501 00:28:53.819697 7 log.go:172] (0xc002f22840) (0xc001248aa0) Stream removed, broadcasting: 1 I0501 00:28:53.819716 7 log.go:172] (0xc002f22840) (0xc002577400) Stream removed, broadcasting: 3 I0501 00:28:53.819727 7 log.go:172] (0xc002f22840) (0xc001248e60) Stream removed, broadcasting: 5 May 1 00:28:53.819: INFO: Exec stderr: "" May 1 00:28:53.819: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:53.819: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:53.847476 7 log.go:172] (0xc0027aa2c0) (0xc000c0b7c0) Create stream I0501 00:28:53.847507 7 log.go:172] (0xc0027aa2c0) (0xc000c0b7c0) Stream added, broadcasting: 1 I0501 00:28:53.849271 7 log.go:172] (0xc0027aa2c0) Reply frame received for 1 I0501 00:28:53.849320 7 log.go:172] (0xc0027aa2c0) (0xc0012497c0) Create stream I0501 00:28:53.849331 7 log.go:172] (0xc0027aa2c0) (0xc0012497c0) Stream added, broadcasting: 3 I0501 00:28:53.850036 7 log.go:172] (0xc0027aa2c0) Reply frame received for 3 I0501 00:28:53.850085 7 log.go:172] (0xc0027aa2c0) (0xc000c0b900) Create stream I0501 00:28:53.850110 7 log.go:172] (0xc0027aa2c0) (0xc000c0b900) Stream added, broadcasting: 5 I0501 00:28:53.850816 7 log.go:172] (0xc0027aa2c0) Reply frame received for 5 I0501 00:28:53.903408 7 log.go:172] (0xc0027aa2c0) Data frame received for 5 I0501 00:28:53.903448 7 log.go:172] (0xc000c0b900) (5) Data frame handling I0501 00:28:53.903474 7 log.go:172] (0xc0027aa2c0) Data frame received for 3 I0501 00:28:53.903528 7 log.go:172] (0xc0012497c0) (3) Data frame handling I0501 00:28:53.903554 7 
log.go:172] (0xc0012497c0) (3) Data frame sent I0501 00:28:53.903570 7 log.go:172] (0xc0027aa2c0) Data frame received for 3 I0501 00:28:53.903613 7 log.go:172] (0xc0012497c0) (3) Data frame handling I0501 00:28:53.904685 7 log.go:172] (0xc0027aa2c0) Data frame received for 1 I0501 00:28:53.904709 7 log.go:172] (0xc000c0b7c0) (1) Data frame handling I0501 00:28:53.904739 7 log.go:172] (0xc000c0b7c0) (1) Data frame sent I0501 00:28:53.904855 7 log.go:172] (0xc0027aa2c0) (0xc000c0b7c0) Stream removed, broadcasting: 1 I0501 00:28:53.904935 7 log.go:172] (0xc0027aa2c0) (0xc000c0b7c0) Stream removed, broadcasting: 1 I0501 00:28:53.904986 7 log.go:172] (0xc0027aa2c0) (0xc0012497c0) Stream removed, broadcasting: 3 I0501 00:28:53.905003 7 log.go:172] (0xc0027aa2c0) (0xc000c0b900) Stream removed, broadcasting: 5 May 1 00:28:53.905: INFO: Exec stderr: "" I0501 00:28:53.905023 7 log.go:172] (0xc0027aa2c0) Go away received STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 1 00:28:53.905: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:53.905: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:53.933662 7 log.go:172] (0xc002eb8bb0) (0xc002577720) Create stream I0501 00:28:53.933678 7 log.go:172] (0xc002eb8bb0) (0xc002577720) Stream added, broadcasting: 1 I0501 00:28:53.935105 7 log.go:172] (0xc002eb8bb0) Reply frame received for 1 I0501 00:28:53.935141 7 log.go:172] (0xc002eb8bb0) (0xc000fcf220) Create stream I0501 00:28:53.935153 7 log.go:172] (0xc002eb8bb0) (0xc000fcf220) Stream added, broadcasting: 3 I0501 00:28:53.935929 7 log.go:172] (0xc002eb8bb0) Reply frame received for 3 I0501 00:28:53.935964 7 log.go:172] (0xc002eb8bb0) (0xc0025777c0) Create stream I0501 00:28:53.935972 7 log.go:172] (0xc002eb8bb0) (0xc0025777c0) Stream added, broadcasting: 5 I0501 00:28:53.936767 7 log.go:172] (0xc002eb8bb0) Reply frame received for 5 I0501 00:28:53.982924 7 log.go:172] (0xc002eb8bb0) Data frame received for 3 I0501 00:28:53.982954 7 log.go:172] (0xc000fcf220) (3) Data frame handling I0501 00:28:53.982977 7 log.go:172] (0xc000fcf220) (3) Data frame sent I0501 00:28:53.982992 7 log.go:172] (0xc002eb8bb0) Data frame received for 3 I0501 00:28:53.983002 7 log.go:172] (0xc000fcf220) (3) Data frame handling I0501 00:28:53.983078 7 log.go:172] (0xc002eb8bb0) Data frame received for 5 I0501 00:28:53.983106 7 log.go:172] (0xc0025777c0) (5) Data frame handling I0501 00:28:53.984243 7 log.go:172] (0xc002eb8bb0) Data frame received for 1 I0501 00:28:53.984265 7 log.go:172] (0xc002577720) (1) Data frame handling I0501 00:28:53.984317 7 log.go:172] (0xc002577720) (1) Data frame sent I0501 00:28:53.984343 7 log.go:172] (0xc002eb8bb0) (0xc002577720) Stream removed, broadcasting: 1 I0501 00:28:53.984392 7 log.go:172] (0xc002eb8bb0) (0xc002577720) Stream removed, broadcasting: 1 I0501 00:28:53.984401 7 log.go:172] (0xc002eb8bb0) (0xc000fcf220) Stream removed, broadcasting: 3 I0501 00:28:53.984527 7 log.go:172] (0xc002eb8bb0) (0xc0025777c0) Stream removed, broadcasting: 5 I0501 00:28:53.984603 7 log.go:172] (0xc002eb8bb0) Go away received May 1 00:28:53.984: INFO: Exec stderr: "" May 1 00:28:53.984: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 
00:28:53.984: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:54.018653 7 log.go:172] (0xc0027aa9a0) (0xc000c0bd60) Create stream I0501 00:28:54.018678 7 log.go:172] (0xc0027aa9a0) (0xc000c0bd60) Stream added, broadcasting: 1 I0501 00:28:54.020415 7 log.go:172] (0xc0027aa9a0) Reply frame received for 1 I0501 00:28:54.020446 7 log.go:172] (0xc0027aa9a0) (0xc000c0be00) Create stream I0501 00:28:54.020456 7 log.go:172] (0xc0027aa9a0) (0xc000c0be00) Stream added, broadcasting: 3 I0501 00:28:54.021218 7 log.go:172] (0xc0027aa9a0) Reply frame received for 3 I0501 00:28:54.021241 7 log.go:172] (0xc0027aa9a0) (0xc000be0500) Create stream I0501 00:28:54.021249 7 log.go:172] (0xc0027aa9a0) (0xc000be0500) Stream added, broadcasting: 5 I0501 00:28:54.021955 7 log.go:172] (0xc0027aa9a0) Reply frame received for 5 I0501 00:28:54.071072 7 log.go:172] (0xc0027aa9a0) Data frame received for 5 I0501 00:28:54.071092 7 log.go:172] (0xc000be0500) (5) Data frame handling I0501 00:28:54.071132 7 log.go:172] (0xc0027aa9a0) Data frame received for 3 I0501 00:28:54.071173 7 log.go:172] (0xc000c0be00) (3) Data frame handling I0501 00:28:54.071225 7 log.go:172] (0xc000c0be00) (3) Data frame sent I0501 00:28:54.071245 7 log.go:172] (0xc0027aa9a0) Data frame received for 3 I0501 00:28:54.071257 7 log.go:172] (0xc000c0be00) (3) Data frame handling I0501 00:28:54.072465 7 log.go:172] (0xc0027aa9a0) Data frame received for 1 I0501 00:28:54.072502 7 log.go:172] (0xc000c0bd60) (1) Data frame handling I0501 00:28:54.072529 7 log.go:172] (0xc000c0bd60) (1) Data frame sent I0501 00:28:54.072553 7 log.go:172] (0xc0027aa9a0) (0xc000c0bd60) Stream removed, broadcasting: 1 I0501 00:28:54.072580 7 log.go:172] (0xc0027aa9a0) Go away received I0501 00:28:54.072634 7 log.go:172] (0xc0027aa9a0) (0xc000c0bd60) Stream removed, broadcasting: 1 I0501 00:28:54.072644 7 log.go:172] (0xc0027aa9a0) (0xc000c0be00) Stream removed, broadcasting: 3 I0501 00:28:54.072648 7 log.go:172] (0xc0027aa9a0) (0xc000be0500) Stream removed, broadcasting: 5 May 1 00:28:54.072: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 1 00:28:54.072: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:54.072: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:54.102337 7 log.go:172] (0xc0027ab130) (0xc000be0dc0) Create stream I0501 00:28:54.102360 7 log.go:172] (0xc0027ab130) (0xc000be0dc0) Stream added, broadcasting: 1 I0501 00:28:54.104453 7 log.go:172] (0xc0027ab130) Reply frame received for 1 I0501 00:28:54.104496 7 log.go:172] (0xc0027ab130) (0xc001f87cc0) Create stream I0501 00:28:54.104510 7 log.go:172] (0xc0027ab130) (0xc001f87cc0) Stream added, broadcasting: 3 I0501 00:28:54.105600 7 log.go:172] (0xc0027ab130) Reply frame received for 3 I0501 00:28:54.105709 7 log.go:172] (0xc0027ab130) (0xc000fcf2c0) Create stream I0501 00:28:54.105720 7 log.go:172] (0xc0027ab130) (0xc000fcf2c0) Stream added, broadcasting: 5 I0501 00:28:54.106576 7 log.go:172] (0xc0027ab130) Reply frame received for 5 I0501 00:28:54.175694 7 log.go:172] (0xc0027ab130) Data frame received for 3 I0501 00:28:54.175747 7 log.go:172] (0xc001f87cc0) (3) Data frame handling I0501 00:28:54.175776 7 log.go:172] (0xc001f87cc0) (3) Data frame sent I0501 00:28:54.175798 7 log.go:172] (0xc0027ab130) Data frame received for 3 I0501 00:28:54.175831 7 
log.go:172] (0xc0027ab130) Data frame received for 5 I0501 00:28:54.175919 7 log.go:172] (0xc000fcf2c0) (5) Data frame handling I0501 00:28:54.175957 7 log.go:172] (0xc001f87cc0) (3) Data frame handling I0501 00:28:54.177437 7 log.go:172] (0xc0027ab130) Data frame received for 1 I0501 00:28:54.177460 7 log.go:172] (0xc000be0dc0) (1) Data frame handling I0501 00:28:54.177505 7 log.go:172] (0xc000be0dc0) (1) Data frame sent I0501 00:28:54.177537 7 log.go:172] (0xc0027ab130) (0xc000be0dc0) Stream removed, broadcasting: 1 I0501 00:28:54.177582 7 log.go:172] (0xc0027ab130) Go away received I0501 00:28:54.177619 7 log.go:172] (0xc0027ab130) (0xc000be0dc0) Stream removed, broadcasting: 1 I0501 00:28:54.177633 7 log.go:172] (0xc0027ab130) (0xc001f87cc0) Stream removed, broadcasting: 3 I0501 00:28:54.177641 7 log.go:172] (0xc0027ab130) (0xc000fcf2c0) Stream removed, broadcasting: 5 May 1 00:28:54.177: INFO: Exec stderr: "" May 1 00:28:54.177: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:54.177: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:54.205806 7 log.go:172] (0xc002e0e840) (0xc0001a06e0) Create stream I0501 00:28:54.205828 7 log.go:172] (0xc002e0e840) (0xc0001a06e0) Stream added, broadcasting: 1 I0501 00:28:54.207432 7 log.go:172] (0xc002e0e840) Reply frame received for 1 I0501 00:28:54.207466 7 log.go:172] (0xc002e0e840) (0xc000fcf4a0) Create stream I0501 00:28:54.207478 7 log.go:172] (0xc002e0e840) (0xc000fcf4a0) Stream added, broadcasting: 3 I0501 00:28:54.208339 7 log.go:172] (0xc002e0e840) Reply frame received for 3 I0501 00:28:54.208369 7 log.go:172] (0xc002e0e840) (0xc000fcfa40) Create stream I0501 00:28:54.208381 7 log.go:172] (0xc002e0e840) (0xc000fcfa40) Stream added, broadcasting: 5 I0501 00:28:54.209089 7 log.go:172] (0xc002e0e840) Reply frame received for 5 I0501 00:28:54.274837 7 log.go:172] (0xc002e0e840) Data frame received for 5 I0501 00:28:54.274861 7 log.go:172] (0xc000fcfa40) (5) Data frame handling I0501 00:28:54.274879 7 log.go:172] (0xc002e0e840) Data frame received for 3 I0501 00:28:54.274935 7 log.go:172] (0xc000fcf4a0) (3) Data frame handling I0501 00:28:54.274974 7 log.go:172] (0xc000fcf4a0) (3) Data frame sent I0501 00:28:54.274990 7 log.go:172] (0xc002e0e840) Data frame received for 3 I0501 00:28:54.274998 7 log.go:172] (0xc000fcf4a0) (3) Data frame handling I0501 00:28:54.275765 7 log.go:172] (0xc002e0e840) Data frame received for 1 I0501 00:28:54.275782 7 log.go:172] (0xc0001a06e0) (1) Data frame handling I0501 00:28:54.275793 7 log.go:172] (0xc0001a06e0) (1) Data frame sent I0501 00:28:54.275800 7 log.go:172] (0xc002e0e840) (0xc0001a06e0) Stream removed, broadcasting: 1 I0501 00:28:54.275807 7 log.go:172] (0xc002e0e840) Go away received I0501 00:28:54.275889 7 log.go:172] (0xc002e0e840) (0xc0001a06e0) Stream removed, broadcasting: 1 I0501 00:28:54.275900 7 log.go:172] (0xc002e0e840) (0xc000fcf4a0) Stream removed, broadcasting: 3 I0501 00:28:54.275905 7 log.go:172] (0xc002e0e840) (0xc000fcfa40) Stream removed, broadcasting: 5 May 1 00:28:54.275: INFO: Exec stderr: "" May 1 00:28:54.275: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:54.275: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:54.304574 
7 log.go:172] (0xc00259b600) (0xc00028b860) Create stream I0501 00:28:54.304605 7 log.go:172] (0xc00259b600) (0xc00028b860) Stream added, broadcasting: 1 I0501 00:28:54.306794 7 log.go:172] (0xc00259b600) Reply frame received for 1 I0501 00:28:54.306823 7 log.go:172] (0xc00259b600) (0xc00028bae0) Create stream I0501 00:28:54.306834 7 log.go:172] (0xc00259b600) (0xc00028bae0) Stream added, broadcasting: 3 I0501 00:28:54.307785 7 log.go:172] (0xc00259b600) Reply frame received for 3 I0501 00:28:54.307815 7 log.go:172] (0xc00259b600) (0xc002577860) Create stream I0501 00:28:54.307827 7 log.go:172] (0xc00259b600) (0xc002577860) Stream added, broadcasting: 5 I0501 00:28:54.308682 7 log.go:172] (0xc00259b600) Reply frame received for 5 I0501 00:28:54.375542 7 log.go:172] (0xc00259b600) Data frame received for 3 I0501 00:28:54.375580 7 log.go:172] (0xc00028bae0) (3) Data frame handling I0501 00:28:54.375593 7 log.go:172] (0xc00028bae0) (3) Data frame sent I0501 00:28:54.375602 7 log.go:172] (0xc00259b600) Data frame received for 3 I0501 00:28:54.375612 7 log.go:172] (0xc00028bae0) (3) Data frame handling I0501 00:28:54.375652 7 log.go:172] (0xc00259b600) Data frame received for 5 I0501 00:28:54.375689 7 log.go:172] (0xc002577860) (5) Data frame handling I0501 00:28:54.376910 7 log.go:172] (0xc00259b600) Data frame received for 1 I0501 00:28:54.376926 7 log.go:172] (0xc00028b860) (1) Data frame handling I0501 00:28:54.376950 7 log.go:172] (0xc00028b860) (1) Data frame sent I0501 00:28:54.376962 7 log.go:172] (0xc00259b600) (0xc00028b860) Stream removed, broadcasting: 1 I0501 00:28:54.376973 7 log.go:172] (0xc00259b600) Go away received I0501 00:28:54.377058 7 log.go:172] (0xc00259b600) (0xc00028b860) Stream removed, broadcasting: 1 I0501 00:28:54.377095 7 log.go:172] (0xc00259b600) (0xc00028bae0) Stream removed, broadcasting: 3 I0501 00:28:54.377283 7 log.go:172] (0xc00259b600) (0xc002577860) Stream removed, broadcasting: 5 May 1 00:28:54.377: INFO: Exec stderr: "" May 1 00:28:54.377: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:28:54.377: INFO: >>> kubeConfig: /root/.kube/config I0501 00:28:54.405874 7 log.go:172] (0xc002eb91e0) (0xc002577c20) Create stream I0501 00:28:54.405892 7 log.go:172] (0xc002eb91e0) (0xc002577c20) Stream added, broadcasting: 1 I0501 00:28:54.407837 7 log.go:172] (0xc002eb91e0) Reply frame received for 1 I0501 00:28:54.407893 7 log.go:172] (0xc002eb91e0) (0xc00033d400) Create stream I0501 00:28:54.407916 7 log.go:172] (0xc002eb91e0) (0xc00033d400) Stream added, broadcasting: 3 I0501 00:28:54.408792 7 log.go:172] (0xc002eb91e0) Reply frame received for 3 I0501 00:28:54.408808 7 log.go:172] (0xc002eb91e0) (0xc000be14a0) Create stream I0501 00:28:54.408814 7 log.go:172] (0xc002eb91e0) (0xc000be14a0) Stream added, broadcasting: 5 I0501 00:28:54.409830 7 log.go:172] (0xc002eb91e0) Reply frame received for 5 I0501 00:28:54.466165 7 log.go:172] (0xc002eb91e0) Data frame received for 5 I0501 00:28:54.466220 7 log.go:172] (0xc000be14a0) (5) Data frame handling I0501 00:28:54.466242 7 log.go:172] (0xc002eb91e0) Data frame received for 3 I0501 00:28:54.466270 7 log.go:172] (0xc00033d400) (3) Data frame handling I0501 00:28:54.466302 7 log.go:172] (0xc00033d400) (3) Data frame sent I0501 00:28:54.466339 7 log.go:172] (0xc002eb91e0) Data frame received for 3 I0501 00:28:54.466363 7 log.go:172] 
(0xc00033d400) (3) Data frame handling I0501 00:28:54.467590 7 log.go:172] (0xc002eb91e0) Data frame received for 1 I0501 00:28:54.467609 7 log.go:172] (0xc002577c20) (1) Data frame handling I0501 00:28:54.467629 7 log.go:172] (0xc002577c20) (1) Data frame sent I0501 00:28:54.467651 7 log.go:172] (0xc002eb91e0) (0xc002577c20) Stream removed, broadcasting: 1 I0501 00:28:54.467766 7 log.go:172] (0xc002eb91e0) Go away received I0501 00:28:54.467820 7 log.go:172] (0xc002eb91e0) (0xc002577c20) Stream removed, broadcasting: 1 I0501 00:28:54.467874 7 log.go:172] (0xc002eb91e0) (0xc00033d400) Stream removed, broadcasting: 3 I0501 00:28:54.467889 7 log.go:172] (0xc002eb91e0) (0xc000be14a0) Stream removed, broadcasting: 5 May 1 00:28:54.467: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:28:54.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7242" for this suite. • [SLOW TEST:13.252 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":125,"skipped":2115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:28:54.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-d3c5f48e-0ac0-45e0-ad0b-443e909bdd4f STEP: Creating a pod to test consume configMaps May 1 00:28:54.563: INFO: Waiting up to 5m0s for pod "pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc" in namespace "configmap-6995" to be "Succeeded or Failed" May 1 00:28:54.576: INFO: Pod "pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.126303ms May 1 00:28:56.580: INFO: Pod "pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01666242s May 1 00:28:58.584: INFO: Pod "pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020923298s STEP: Saw pod success May 1 00:28:58.584: INFO: Pod "pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc" satisfied condition "Succeeded or Failed" May 1 00:28:58.587: INFO: Trying to get logs from node latest-worker pod pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc container configmap-volume-test: STEP: delete the pod May 1 00:28:58.637: INFO: Waiting for pod pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc to disappear May 1 00:28:58.672: INFO: Pod pod-configmaps-992201d7-f544-46fc-90e1-53f04ac987bc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:28:58.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6995" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":290,"completed":126,"skipped":2141,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:28:58.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 1 00:28:58.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4939' May 1 00:28:58.992: INFO: stderr: "" May 1 00:28:58.992: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 00:28:58.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4939' May 1 00:28:59.158: INFO: stderr: "" May 1 00:28:59.158: INFO: stdout: "update-demo-nautilus-7cnhv update-demo-nautilus-fjczc " May 1 00:28:59.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7cnhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4939' May 1 00:28:59.248: INFO: stderr: "" May 1 00:28:59.248: INFO: stdout: "" May 1 00:28:59.248: INFO: update-demo-nautilus-7cnhv is created but not running May 1 00:29:04.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4939' May 1 00:29:04.347: INFO: stderr: "" May 1 00:29:04.347: INFO: stdout: "update-demo-nautilus-7cnhv update-demo-nautilus-fjczc " May 1 00:29:04.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7cnhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4939' May 1 00:29:04.430: INFO: stderr: "" May 1 00:29:04.430: INFO: stdout: "true" May 1 00:29:04.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7cnhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4939' May 1 00:29:04.520: INFO: stderr: "" May 1 00:29:04.520: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 00:29:04.520: INFO: validating pod update-demo-nautilus-7cnhv May 1 00:29:04.524: INFO: got data: { "image": "nautilus.jpg" } May 1 00:29:04.524: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 00:29:04.524: INFO: update-demo-nautilus-7cnhv is verified up and running May 1 00:29:04.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fjczc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4939' May 1 00:29:04.645: INFO: stderr: "" May 1 00:29:04.645: INFO: stdout: "true" May 1 00:29:04.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fjczc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4939' May 1 00:29:04.740: INFO: stderr: "" May 1 00:29:04.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 00:29:04.740: INFO: validating pod update-demo-nautilus-fjczc May 1 00:29:04.750: INFO: got data: { "image": "nautilus.jpg" } May 1 00:29:04.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 00:29:04.750: INFO: update-demo-nautilus-fjczc is verified up and running STEP: using delete to clean up resources May 1 00:29:04.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4939' May 1 00:29:04.866: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 1 00:29:04.866: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 1 00:29:04.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4939' May 1 00:29:04.962: INFO: stderr: "No resources found in kubectl-4939 namespace.\n" May 1 00:29:04.962: INFO: stdout: "" May 1 00:29:04.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4939 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 00:29:05.062: INFO: stderr: "" May 1 00:29:05.062: INFO: stdout: "update-demo-nautilus-7cnhv\nupdate-demo-nautilus-fjczc\n" May 1 00:29:05.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4939' May 1 00:29:06.378: INFO: stderr: "No resources found in kubectl-4939 namespace.\n" May 1 00:29:06.378: INFO: stdout: "" May 1 00:29:06.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4939 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 00:29:06.569: INFO: stderr: "" May 1 00:29:06.569: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:29:06.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4939" for this suite. 
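The Update Demo sequence above can be replayed by hand: create the replication controller from a manifest, list the matching pods with a go-template, then force-delete and confirm nothing is left behind. A minimal sketch, assuming a reachable cluster and an update-demo manifest on disk (the manifest filename is illustrative; the namespace is the one from this run):

    # create the RC, then list pods carrying the update-demo label
    kubectl create -f update-demo-nautilus.yaml --namespace=kubectl-4939
    kubectl get pods -l name=update-demo --namespace=kubectl-4939 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    # force-delete without waiting for graceful termination, then verify cleanup
    kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-4939
    kubectl get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4939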
• [SLOW TEST:7.926 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":290,"completed":127,"skipped":2152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:29:06.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:29:06.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7414" for this suite. 
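The Events test that follows walks one object through the full CRUD surface of the core events API: create, list across all namespaces, patch, fetch, delete, list again. The suite drives this through the Go client; a rough kubectl equivalent, assuming a hand-written Event manifest (the event name, involved object, and messages below are made up for illustration):

    # create a synthetic event; involvedObject only needs a plausible reference
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Event
    metadata:
      name: test-event
      namespace: default
    involvedObject:
      kind: Pod
      name: placeholder-pod
      namespace: default
    reason: Testing
    message: original message
    type: Normal
    EOF
    kubectl get events --all-namespaces                                   # list
    kubectl patch event test-event -n default -p '{"message":"patched"}'  # patch
    kubectl get event test-event -n default -o yaml                       # fetch
    kubectl delete event test-event -n default                            # delete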
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":290,"completed":128,"skipped":2194,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:29:06.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating an pod May 1 00:29:07.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-3233 -- logs-generator --log-lines-total 100 --run-duration 20s' May 1 00:29:07.571: INFO: stderr: "" May 1 00:29:07.571: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 1 00:29:07.571: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 1 00:29:07.571: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3233" to be "running and ready, or succeeded" May 1 00:29:07.586: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.015925ms May 1 00:29:09.729: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157908927s May 1 00:29:11.734: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.163245282s May 1 00:29:11.734: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 1 00:29:11.735: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 1 00:29:11.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3233' May 1 00:29:11.839: INFO: stderr: "" May 1 00:29:11.839: INFO: stdout: "I0501 00:29:11.098569 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/nr7c 559\nI0501 00:29:11.298731 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/vc9n 235\nI0501 00:29:11.498733 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/4lx 547\nI0501 00:29:11.698687 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/zs2 298\n" STEP: limiting log lines May 1 00:29:11.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3233 --tail=1' May 1 00:29:11.961: INFO: stderr: "" May 1 00:29:11.961: INFO: stdout: "I0501 00:29:11.898699 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/zh6x 387\n" May 1 00:29:11.961: INFO: got output "I0501 00:29:11.898699 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/zh6x 387\n" STEP: limiting log bytes May 1 00:29:11.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3233 --limit-bytes=1' May 1 00:29:12.097: INFO: stderr: "" May 1 00:29:12.097: INFO: stdout: "I" May 1 00:29:12.097: INFO: got output "I" STEP: exposing timestamps May 1 00:29:12.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3233 --tail=1 --timestamps' May 1 00:29:12.219: INFO: stderr: "" May 1 00:29:12.219: INFO: stdout: "2020-05-01T00:29:12.098789106Z I0501 00:29:12.098679 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/n2l 309\n" May 1 00:29:12.219: INFO: got output "2020-05-01T00:29:12.098789106Z I0501 00:29:12.098679 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/n2l 309\n" STEP: restricting to a time range May 1 00:29:14.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3233 --since=1s' May 1 00:29:14.848: INFO: stderr: "" May 1 00:29:14.848: INFO: stdout: "I0501 00:29:13.898655 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/khtm 554\nI0501 00:29:14.098708 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/hdz 238\nI0501 00:29:14.298709 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/fqf 466\nI0501 00:29:14.498701 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/7kx5 261\nI0501 00:29:14.698746 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/7pw 424\n" May 1 00:29:14.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3233 --since=24h' May 1 00:29:14.967: INFO: stderr: "" May 1 00:29:14.967: INFO: stdout: "I0501 00:29:11.098569 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/nr7c 559\nI0501 00:29:11.298731 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/vc9n 235\nI0501 00:29:11.498733 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/4lx 547\nI0501 00:29:11.698687 1 logs_generator.go:76] 3 PUT 
/api/v1/namespaces/default/pods/zs2 298\nI0501 00:29:11.898699 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/zh6x 387\nI0501 00:29:12.098679 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/n2l 309\nI0501 00:29:12.298726 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/jq2 518\nI0501 00:29:12.498836 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/gld 378\nI0501 00:29:12.698697 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/2xnv 223\nI0501 00:29:12.898667 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/bkzf 221\nI0501 00:29:13.098698 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/q5g6 532\nI0501 00:29:13.298698 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/vwxt 287\nI0501 00:29:13.498776 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/gjc7 217\nI0501 00:29:13.698691 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/pwv7 346\nI0501 00:29:13.898655 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/khtm 554\nI0501 00:29:14.098708 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/hdz 238\nI0501 00:29:14.298709 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/fqf 466\nI0501 00:29:14.498701 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/7kx5 261\nI0501 00:29:14.698746 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/7pw 424\nI0501 00:29:14.899014 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/qjc 315\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 1 00:29:14.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3233' May 1 00:29:25.286: INFO: stderr: "" May 1 00:29:25.286: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:29:25.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3233" for this suite. 
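Every filter exercised above maps to a single kubectl logs flag, so the whole test can be replayed against any pod that writes to stdout (the pod, container, and namespace names here follow this run):

    kubectl logs logs-generator logs-generator -n kubectl-3233                 # full log
    kubectl logs logs-generator logs-generator -n kubectl-3233 --tail=1        # last line only
    kubectl logs logs-generator logs-generator -n kubectl-3233 --limit-bytes=1 # byte cap
    kubectl logs logs-generator logs-generator -n kubectl-3233 --tail=1 --timestamps
    kubectl logs logs-generator logs-generator -n kubectl-3233 --since=1s      # time window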
• [SLOW TEST:18.459 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":290,"completed":129,"skipped":2195,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:29:25.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 1 00:29:25.380: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 00:29:25.402: INFO: Waiting for terminating namespaces to be deleted... May 1 00:29:25.404: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 1 00:29:25.407: INFO: test-pod from e2e-kubelet-etc-hosts-7242 started at 2020-05-01 00:28:41 +0000 UTC (3 container statuses recorded) May 1 00:29:25.407: INFO: Container busybox-1 ready: true, restart count 0 May 1 00:29:25.407: INFO: Container busybox-2 ready: true, restart count 0 May 1 00:29:25.407: INFO: Container busybox-3 ready: true, restart count 0 May 1 00:29:25.407: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 1 00:29:25.407: INFO: Container kindnet-cni ready: true, restart count 0 May 1 00:29:25.407: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 1 00:29:25.407: INFO: Container kube-proxy ready: true, restart count 0 May 1 00:29:25.407: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 1 00:29:25.412: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-7242 started at 2020-05-01 00:28:49 +0000 UTC (2 container statuses recorded) May 1 00:29:25.412: INFO: Container busybox-1 ready: true, restart count 0 May 1 00:29:25.412: INFO: Container busybox-2 ready: true, restart count 0 May 1 00:29:25.412: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 1 00:29:25.412: INFO: Container kindnet-cni ready: true, restart count 0 May 1 00:29:25.412: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 1 00:29:25.412: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node 
latest-worker2 May 1 00:29:25.499: INFO: Pod test-host-network-pod requesting resource cpu=0m on Node latest-worker2 May 1 00:29:25.499: INFO: Pod test-pod requesting resource cpu=0m on Node latest-worker May 1 00:29:25.499: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 1 00:29:25.499: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 1 00:29:25.499: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 1 00:29:25.499: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 1 00:29:25.499: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 1 00:29:25.506: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70.160ac0097adc7a05], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5522/filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70.160ac009d69e1807], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70.160ac00da257c1b1], Reason = [Created], Message = [Created container filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70] STEP: Considering event: Type = [Normal], Name = [filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70.160ac00db9081c12], Reason = [Started], Message = [Started container filler-pod-05716fa5-dd94-44e3-bf08-1f11cc5d7b70] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe.160ac00978c594db], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5522/filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe.160ac009beb8f41b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe.160ac00da2554091], Reason = [Created], Message = [Created container filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe.160ac00db9081c26], Reason = [Started], Message = [Started container filler-pod-e6059da2-a0b5-4620-85b7-728f1b4b78fe] STEP: Considering event: Type = [Warning], Name = [additional-pod.160ac00e2451c361], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:29:46.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5522" for this suite. 
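The predicate test saturates each node with filler pods sized to its remaining allocatable CPU, then shows that one more CPU-hungry pod stays unschedulable. The same observation can be made by hand while the test is running; a sketch, where the node and namespace names are the ones from this run and the event query is one way, not the only way, to surface the scheduler's verdict:

    # remaining capacity the scheduler works against
    kubectl describe node latest-worker | grep -A 8 'Allocatable'
    # the scheduler's complaint for the pod that does not fit
    kubectl get events -n sched-pred-5522 --field-selector reason=FailedScheduling
    # expected message shape: "0/3 nodes are available: ... 2 Insufficient cpu."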
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.386 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":290,"completed":130,"skipped":2209,"failed":0} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:29:46.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 00:29:59.346: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7888986e-6c9b-41b8-9614-e7198f22a68c" May 1 00:29:59.346: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7888986e-6c9b-41b8-9614-e7198f22a68c" in namespace "pods-5904" to be "terminated due to deadline exceeded" May 1 00:29:59.391: INFO: Pod "pod-update-activedeadlineseconds-7888986e-6c9b-41b8-9614-e7198f22a68c": Phase="Running", Reason="", readiness=true. Elapsed: 44.712844ms May 1 00:30:01.652: INFO: Pod "pod-update-activedeadlineseconds-7888986e-6c9b-41b8-9614-e7198f22a68c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.305849424s May 1 00:30:01.652: INFO: Pod "pod-update-activedeadlineseconds-7888986e-6c9b-41b8-9614-e7198f22a68c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:30:01.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5904" for this suite. 
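activeDeadlineSeconds is one of the few pod-spec fields that may be updated on a live pod, which is what the test relies on: it shrinks the deadline on a running pod, then waits for the kubelet to fail the pod with reason DeadlineExceeded. A hand-run equivalent, with an illustrative pod name in place of the generated one:

    # shorten the deadline on a running pod ...
    kubectl patch pod my-running-pod -n pods-5904 -p '{"spec":{"activeDeadlineSeconds":5}}'
    # ... then watch it flip to Failed / DeadlineExceeded
    kubectl get pod my-running-pod -n pods-5904 -o jsonpath='{.status.phase} {.status.reason}'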
• [SLOW TEST:15.206 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":290,"completed":131,"skipped":2211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:30:01.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1343.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1343.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 255.130.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.130.255_udp@PTR;check="$$(dig +tcp +noall +answer +search 255.130.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.130.255_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1343.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1343.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1343.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1343.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1343.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 255.130.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.130.255_udp@PTR;check="$$(dig +tcp +noall +answer +search 255.130.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.130.255_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:30:17.084: INFO: Unable to read wheezy_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.090: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.111: INFO: Unable to read jessie_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.113: INFO: Unable to read jessie_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:17.135: INFO: Lookups using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 failed for: [wheezy_udp@dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_udp@dns-test-service.dns-1343.svc.cluster.local jessie_tcp@dns-test-service.dns-1343.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local] May 1 00:30:22.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.144: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods 
dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.147: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.151: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.169: INFO: Unable to read jessie_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.176: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.179: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:22.191: INFO: Lookups using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 failed for: [wheezy_udp@dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_udp@dns-test-service.dns-1343.svc.cluster.local jessie_tcp@dns-test-service.dns-1343.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local] May 1 00:30:27.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.146: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.150: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.170: INFO: Unable to read jessie_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could 
not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.172: INFO: Unable to read jessie_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.175: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.176: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:27.191: INFO: Lookups using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 failed for: [wheezy_udp@dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_udp@dns-test-service.dns-1343.svc.cluster.local jessie_tcp@dns-test-service.dns-1343.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local] May 1 00:30:32.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.144: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.147: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.150: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.166: INFO: Unable to read jessie_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.168: INFO: Unable to read jessie_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.171: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.173: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod 
dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:32.187: INFO: Lookups using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 failed for: [wheezy_udp@dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_udp@dns-test-service.dns-1343.svc.cluster.local jessie_tcp@dns-test-service.dns-1343.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local] May 1 00:30:37.141: INFO: Unable to read wheezy_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.153: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.175: INFO: Unable to read jessie_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.178: INFO: Unable to read jessie_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.181: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.184: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:37.201: INFO: Lookups using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 failed for: [wheezy_udp@dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_udp@dns-test-service.dns-1343.svc.cluster.local jessie_tcp@dns-test-service.dns-1343.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local] May 1 00:30:42.141: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.152: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.167: INFO: Unable to read jessie_udp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.171: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.173: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local from pod dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7: the server could not find the requested resource (get pods dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7) May 1 00:30:42.188: INFO: Lookups using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 failed for: [wheezy_udp@dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@dns-test-service.dns-1343.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_udp@dns-test-service.dns-1343.svc.cluster.local jessie_tcp@dns-test-service.dns-1343.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1343.svc.cluster.local] May 1 00:30:47.201: INFO: DNS probes using dns-1343/dns-test-d318bc56-d4d2-46a4-bf16-5e3284d34fa7 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:30:48.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1343" for this suite. 
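The five failing rounds above are the probe loop converging: the test keeps re-querying until the cluster DNS serves both the A record for the service FQDN and the SRV record derived from its named port, and the 00:30:47 line marks the point where all eight lookups succeed. In manifest form, the objects being exercised look roughly like the sketch below; only the service name, namespace, and record names echo the log, while the selector label, probe pod, and image are illustrative assumptions.

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-1343
spec:
  selector:
    dns-test: "true"               # assumed pod label
  ports:
  - name: http                     # named port -> _http._tcp.dns-test-service... SRV record
    port: 80
    protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe                  # hypothetical probe pod
  namespace: dns-1343
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: tutum/dnsutils          # illustrative; any image that ships dig works
    command: ["sh", "-c", "dig dns-test-service.dns-1343.svc.cluster.local A _http._tcp.dns-test-service.dns-1343.svc.cluster.local SRV"]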
• [SLOW TEST:47.171 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":290,"completed":132,"skipped":2236,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:30:49.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5696 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5696 STEP: creating replication controller externalsvc in namespace services-5696 I0501 00:30:50.129831 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5696, replica count: 2 I0501 00:30:53.180219 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:30:56.180429 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:30:59.180608 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 1 00:30:59.275: INFO: Creating new exec pod May 1 00:31:13.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5696 execpodlcpz4 -- /bin/sh -x -c nslookup nodeport-service' May 1 00:31:13.553: INFO: stderr: "I0501 00:31:13.460622 1682 log.go:172] (0xc00096d600) (0xc0007f65a0) Create stream\nI0501 00:31:13.460697 1682 log.go:172] (0xc00096d600) (0xc0007f65a0) Stream added, broadcasting: 1\nI0501 00:31:13.463057 1682 log.go:172] (0xc00096d600) Reply frame received for 1\nI0501 00:31:13.463099 1682 log.go:172] (0xc00096d600) (0xc0007fee60) Create stream\nI0501 00:31:13.463113 1682 log.go:172] (0xc00096d600) (0xc0007fee60) Stream added, broadcasting: 3\nI0501 00:31:13.463837 1682 log.go:172] (0xc00096d600) Reply frame received for 3\nI0501 00:31:13.463864 1682 log.go:172] (0xc00096d600) (0xc0007f6f00) Create stream\nI0501 00:31:13.463875 1682 log.go:172] (0xc00096d600) (0xc0007f6f00) Stream added, broadcasting: 5\nI0501 00:31:13.464404 1682 log.go:172] (0xc00096d600) Reply frame received for 5\nI0501 00:31:13.539142 1682 log.go:172] 
(0xc00096d600) Data frame received for 5\nI0501 00:31:13.539174 1682 log.go:172] (0xc0007f6f00) (5) Data frame handling\nI0501 00:31:13.539193 1682 log.go:172] (0xc0007f6f00) (5) Data frame sent\n+ nslookup nodeport-service\nI0501 00:31:13.546750 1682 log.go:172] (0xc00096d600) Data frame received for 3\nI0501 00:31:13.546770 1682 log.go:172] (0xc0007fee60) (3) Data frame handling\nI0501 00:31:13.546790 1682 log.go:172] (0xc0007fee60) (3) Data frame sent\nI0501 00:31:13.547566 1682 log.go:172] (0xc00096d600) Data frame received for 3\nI0501 00:31:13.547599 1682 log.go:172] (0xc0007fee60) (3) Data frame handling\nI0501 00:31:13.547631 1682 log.go:172] (0xc0007fee60) (3) Data frame sent\nI0501 00:31:13.547942 1682 log.go:172] (0xc00096d600) Data frame received for 3\nI0501 00:31:13.547975 1682 log.go:172] (0xc0007fee60) (3) Data frame handling\nI0501 00:31:13.548004 1682 log.go:172] (0xc00096d600) Data frame received for 5\nI0501 00:31:13.548022 1682 log.go:172] (0xc0007f6f00) (5) Data frame handling\nI0501 00:31:13.549715 1682 log.go:172] (0xc00096d600) Data frame received for 1\nI0501 00:31:13.549740 1682 log.go:172] (0xc0007f65a0) (1) Data frame handling\nI0501 00:31:13.549768 1682 log.go:172] (0xc0007f65a0) (1) Data frame sent\nI0501 00:31:13.549796 1682 log.go:172] (0xc00096d600) (0xc0007f65a0) Stream removed, broadcasting: 1\nI0501 00:31:13.550040 1682 log.go:172] (0xc00096d600) Go away received\nI0501 00:31:13.550136 1682 log.go:172] (0xc00096d600) (0xc0007f65a0) Stream removed, broadcasting: 1\nI0501 00:31:13.550154 1682 log.go:172] (0xc00096d600) (0xc0007fee60) Stream removed, broadcasting: 3\nI0501 00:31:13.550164 1682 log.go:172] (0xc00096d600) (0xc0007f6f00) Stream removed, broadcasting: 5\n" May 1 00:31:13.554: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5696.svc.cluster.local\tcanonical name = externalsvc.services-5696.svc.cluster.local.\nName:\texternalsvc.services-5696.svc.cluster.local\nAddress: 10.98.101.137\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5696, will wait for the garbage collector to delete the pods May 1 00:31:13.611: INFO: Deleting ReplicationController externalsvc took: 4.622303ms May 1 00:31:13.911: INFO: Terminating ReplicationController externalsvc pods took: 300.191433ms May 1 00:31:34.938: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:31:34.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5696" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:45.925 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":290,"completed":133,"skipped":2249,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:31:34.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:31:47.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8366" for this suite. • [SLOW TEST:12.119 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":290,"completed":134,"skipped":2261,"failed":0} [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:31:47.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-cc1a466e-9088-45ae-8ded-09ed669a9e31 in namespace container-probe-5226 May 1 00:31:51.231: INFO: Started pod busybox-cc1a466e-9088-45ae-8ded-09ed669a9e31 in namespace container-probe-5226 STEP: checking the pod's current state and verifying that restartCount is present May 1 00:31:51.234: INFO: Initial restart count of pod busybox-cc1a466e-9088-45ae-8ded-09ed669a9e31 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:35:52.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5226" for this suite. 
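The four minutes between the initial restart count check (00:31:51) and the teardown (00:35:52) are the observation window: the pod keeps /tmp/health in place, so the exec probe never fails and restartCount stays 0. A minimal sketch of a pod with that shape, assuming the conventional busybox pattern (the pod name, image tag, and probe timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness           # illustrative name
  namespace: container-probe-5226
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 15
      failureThreshold: 1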
• [SLOW TEST:245.889 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":290,"completed":135,"skipped":2261,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:35:53.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b85a07b7-98fc-4bef-91e5-0a47a96077d8 STEP: Creating a pod to test consume secrets May 1 00:35:53.140: INFO: Waiting up to 5m0s for pod "pod-secrets-ae638057-689d-454c-96b5-a842498e7e80" in namespace "secrets-9637" to be "Succeeded or Failed" May 1 00:35:53.157: INFO: Pod "pod-secrets-ae638057-689d-454c-96b5-a842498e7e80": Phase="Pending", Reason="", readiness=false. Elapsed: 17.871026ms May 1 00:35:55.221: INFO: Pod "pod-secrets-ae638057-689d-454c-96b5-a842498e7e80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081013674s May 1 00:35:57.224: INFO: Pod "pod-secrets-ae638057-689d-454c-96b5-a842498e7e80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084846436s STEP: Saw pod success May 1 00:35:57.224: INFO: Pod "pod-secrets-ae638057-689d-454c-96b5-a842498e7e80" satisfied condition "Succeeded or Failed" May 1 00:35:57.227: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ae638057-689d-454c-96b5-a842498e7e80 container secret-volume-test: STEP: delete the pod May 1 00:35:57.274: INFO: Waiting for pod pod-secrets-ae638057-689d-454c-96b5-a842498e7e80 to disappear May 1 00:35:57.280: INFO: Pod pod-secrets-ae638057-689d-454c-96b5-a842498e7e80 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:35:57.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9637" for this suite. STEP: Destroying namespace "secret-namespace-4913" for this suite. 
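The point of the second namespace (secret-namespace-4913) is name shadowing: a secret reference in a pod spec is resolved strictly within the pod's own namespace, so an identically named secret elsewhere cannot leak in. A rough sketch under that assumption (secret names, keys, and values are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                # same name in both namespaces
  namespace: secrets-9637
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: secret-namespace-4913
stringData:
  data-1: should-never-be-seen
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: secrets-9637
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test      # resolves to the secrets-9637 copy only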
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":290,"completed":136,"skipped":2264,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:35:57.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2324 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 00:35:57.691: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 1 00:35:57.783: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 00:35:59.885: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 00:36:01.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:36:03.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:36:05.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:36:07.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:36:09.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:36:11.787: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 00:36:13.787: INFO: The status of Pod netserver-0 is Running (Ready = true) May 1 00:36:13.793: INFO: The status of Pod netserver-1 is Running (Ready = false) May 1 00:36:15.798: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 1 00:36:19.872: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.122:8080/dial?request=hostname&protocol=http&host=10.244.1.121&port=8080&tries=1'] Namespace:pod-network-test-2324 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:36:19.872: INFO: >>> kubeConfig: /root/.kube/config I0501 00:36:19.906172 7 log.go:172] (0xc00282a000) (0xc000fa1ae0) Create stream I0501 00:36:19.906209 7 log.go:172] (0xc00282a000) (0xc000fa1ae0) Stream added, broadcasting: 1 I0501 00:36:19.908273 7 log.go:172] (0xc00282a000) Reply frame received for 1 I0501 00:36:19.908308 7 log.go:172] (0xc00282a000) (0xc000fa1c20) Create stream I0501 00:36:19.908319 7 log.go:172] (0xc00282a000) (0xc000fa1c20) Stream added, broadcasting: 3 I0501 00:36:19.909555 7 log.go:172] (0xc00282a000) Reply frame received for 3 I0501 00:36:19.909610 7 log.go:172] (0xc00282a000) (0xc000e661e0) Create stream I0501 00:36:19.909632 7 log.go:172] (0xc00282a000) (0xc000e661e0) Stream added, broadcasting: 5 I0501 00:36:19.910631 7 log.go:172] (0xc00282a000) Reply frame received for 5 I0501 00:36:20.004826 7 log.go:172] (0xc00282a000) Data frame received for 3 I0501 
00:36:20.004859 7 log.go:172] (0xc000fa1c20) (3) Data frame handling I0501 00:36:20.004908 7 log.go:172] (0xc000fa1c20) (3) Data frame sent I0501 00:36:20.005608 7 log.go:172] (0xc00282a000) Data frame received for 5 I0501 00:36:20.005640 7 log.go:172] (0xc000e661e0) (5) Data frame handling I0501 00:36:20.005710 7 log.go:172] (0xc00282a000) Data frame received for 3 I0501 00:36:20.005735 7 log.go:172] (0xc000fa1c20) (3) Data frame handling I0501 00:36:20.007666 7 log.go:172] (0xc00282a000) Data frame received for 1 I0501 00:36:20.007739 7 log.go:172] (0xc000fa1ae0) (1) Data frame handling I0501 00:36:20.007766 7 log.go:172] (0xc000fa1ae0) (1) Data frame sent I0501 00:36:20.007786 7 log.go:172] (0xc00282a000) (0xc000fa1ae0) Stream removed, broadcasting: 1 I0501 00:36:20.007802 7 log.go:172] (0xc00282a000) Go away received I0501 00:36:20.007941 7 log.go:172] (0xc00282a000) (0xc000fa1ae0) Stream removed, broadcasting: 1 I0501 00:36:20.007966 7 log.go:172] (0xc00282a000) (0xc000fa1c20) Stream removed, broadcasting: 3 I0501 00:36:20.007985 7 log.go:172] (0xc00282a000) (0xc000e661e0) Stream removed, broadcasting: 5 May 1 00:36:20.008: INFO: Waiting for responses: map[] May 1 00:36:20.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.122:8080/dial?request=hostname&protocol=http&host=10.244.2.73&port=8080&tries=1'] Namespace:pod-network-test-2324 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:36:20.011: INFO: >>> kubeConfig: /root/.kube/config I0501 00:36:20.043946 7 log.go:172] (0xc002e0e4d0) (0xc000e66780) Create stream I0501 00:36:20.043980 7 log.go:172] (0xc002e0e4d0) (0xc000e66780) Stream added, broadcasting: 1 I0501 00:36:20.046174 7 log.go:172] (0xc002e0e4d0) Reply frame received for 1 I0501 00:36:20.046212 7 log.go:172] (0xc002e0e4d0) (0xc00070cc80) Create stream I0501 00:36:20.046226 7 log.go:172] (0xc002e0e4d0) (0xc00070cc80) Stream added, broadcasting: 3 I0501 00:36:20.047041 7 log.go:172] (0xc002e0e4d0) Reply frame received for 3 I0501 00:36:20.047075 7 log.go:172] (0xc002e0e4d0) (0xc001b6b180) Create stream I0501 00:36:20.047089 7 log.go:172] (0xc002e0e4d0) (0xc001b6b180) Stream added, broadcasting: 5 I0501 00:36:20.047902 7 log.go:172] (0xc002e0e4d0) Reply frame received for 5 I0501 00:36:20.115961 7 log.go:172] (0xc002e0e4d0) Data frame received for 3 I0501 00:36:20.115995 7 log.go:172] (0xc00070cc80) (3) Data frame handling I0501 00:36:20.116015 7 log.go:172] (0xc00070cc80) (3) Data frame sent I0501 00:36:20.116696 7 log.go:172] (0xc002e0e4d0) Data frame received for 3 I0501 00:36:20.116748 7 log.go:172] (0xc00070cc80) (3) Data frame handling I0501 00:36:20.116768 7 log.go:172] (0xc002e0e4d0) Data frame received for 5 I0501 00:36:20.116773 7 log.go:172] (0xc001b6b180) (5) Data frame handling I0501 00:36:20.118402 7 log.go:172] (0xc002e0e4d0) Data frame received for 1 I0501 00:36:20.118426 7 log.go:172] (0xc000e66780) (1) Data frame handling I0501 00:36:20.118441 7 log.go:172] (0xc000e66780) (1) Data frame sent I0501 00:36:20.118562 7 log.go:172] (0xc002e0e4d0) (0xc000e66780) Stream removed, broadcasting: 1 I0501 00:36:20.118611 7 log.go:172] (0xc002e0e4d0) Go away received I0501 00:36:20.118688 7 log.go:172] (0xc002e0e4d0) (0xc000e66780) Stream removed, broadcasting: 1 I0501 00:36:20.118706 7 log.go:172] (0xc002e0e4d0) (0xc00070cc80) Stream removed, broadcasting: 3 I0501 00:36:20.118715 7 log.go:172] (0xc002e0e4d0) (0xc001b6b180) Stream removed, broadcasting: 5 May 1 
00:36:20.118: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:36:20.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2324" for this suite. • [SLOW TEST:22.606 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":290,"completed":137,"skipped":2275,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:36:20.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:36:20.275: INFO: Create a RollingUpdate DaemonSet May 1 00:36:20.280: INFO: Check that daemon pods launch on every node of the cluster May 1 00:36:20.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:20.328: INFO: Number of nodes with available pods: 0 May 1 00:36:20.328: INFO: Node latest-worker is running more than one daemon pod May 1 00:36:21.334: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:21.339: INFO: Number of nodes with available pods: 0 May 1 00:36:21.339: INFO: Node latest-worker is running more than one daemon pod May 1 00:36:22.482: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:22.485: INFO: Number of nodes with available pods: 0 May 1 00:36:22.485: INFO: Node latest-worker is running more than one daemon pod May 1 00:36:23.333: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:23.337: INFO: Number of nodes with available pods: 0 May 1 00:36:23.337: INFO: Node latest-worker is running more than one daemon pod May 1 00:36:24.334: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:24.337: INFO: Number of nodes with available pods: 1 May 1 00:36:24.337: INFO: Node latest-worker is running more than one daemon pod May 1 00:36:25.336: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:25.339: INFO: Number of nodes with available pods: 2 May 1 00:36:25.339: INFO: Number of running nodes: 2, number of available pods: 2 May 1 00:36:25.339: INFO: Update the DaemonSet to trigger a rollout May 1 00:36:25.346: INFO: Updating DaemonSet daemon-set May 1 00:36:35.438: INFO: Roll back the DaemonSet before rollout is complete May 1 00:36:35.446: INFO: Updating DaemonSet daemon-set May 1 00:36:35.446: INFO: Make sure DaemonSet rollback is complete May 1 00:36:35.470: INFO: Wrong image for pod: daemon-set-5hgf5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 1 00:36:35.470: INFO: Pod daemon-set-5hgf5 is not available May 1 00:36:35.486: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:36.490: INFO: Wrong image for pod: daemon-set-5hgf5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 1 00:36:36.490: INFO: Pod daemon-set-5hgf5 is not available May 1 00:36:36.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:37.490: INFO: Wrong image for pod: daemon-set-5hgf5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 1 00:36:37.490: INFO: Pod daemon-set-5hgf5 is not available May 1 00:36:37.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 1 00:36:38.491: INFO: Pod daemon-set-xn85r is not available May 1 00:36:38.495: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3843, will wait for the garbage collector to delete the pods May 1 00:36:38.561: INFO: Deleting DaemonSet.extensions daemon-set took: 6.835976ms May 1 00:36:38.861: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265363ms May 1 00:36:45.365: INFO: Number of nodes with available pods: 0 May 1 00:36:45.365: INFO: Number of running nodes: 0, number of available pods: 0 May 1 00:36:45.367: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3843/daemonsets","resourceVersion":"457428"},"items":null} May 1 00:36:45.369: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3843/pods","resourceVersion":"457428"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:36:45.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3843" for this suite. 
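The rollout/rollback choreography above can be reproduced by hand: update a RollingUpdate DaemonSet to an unresolvable image, then undo before the rollout completes; only the pod that already picked up the bad image (daemon-set-5hgf5 here) gets replaced, and the rest never restart. A sketch using the image names from the log (the label and container name are assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-3843
spec:
  selector:
    matchLabels:
      app: daemon-set              # assumed label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                  # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine
# Trigger the broken rollout, then roll back before it finishes:
#   kubectl -n daemonsets-3843 set image ds/daemon-set app=foo:non-existent
#   kubectl -n daemonsets-3843 rollout undo ds/daemon-set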
• [SLOW TEST:25.255 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":290,"completed":138,"skipped":2282,"failed":0} SSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:36:45.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2979, will wait for the garbage collector to delete the pods May 1 00:36:51.525: INFO: Deleting Job.batch foo took: 6.177385ms May 1 00:36:51.625: INFO: Terminating Job.batch foo pods took: 100.41136ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:37:35.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2979" for this suite. • [SLOW TEST:49.955 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":290,"completed":139,"skipped":2286,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:37:35.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:37:35.387: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:37:36.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3741" for this suite. 
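The "defaulting for requests and from storage" wording refers to the two places a structural-schema default is applied: when an object is created or updated, and when an older stored object missing the field is read back. A minimal sketch of such a CRD (the group, kind, and defaulted field are entirely hypothetical):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1         # applied on create and when read from storage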
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":290,"completed":140,"skipped":2288,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:37:36.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a99b66a2-5852-474b-8962-d39634e2c20a STEP: Creating a pod to test consume configMaps May 1 00:37:36.712: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430" in namespace "projected-552" to be "Succeeded or Failed" May 1 00:37:36.719: INFO: Pod "pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430": Phase="Pending", Reason="", readiness=false. Elapsed: 7.728594ms May 1 00:37:38.724: INFO: Pod "pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011906078s May 1 00:37:40.727: INFO: Pod "pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430": Phase="Running", Reason="", readiness=true. Elapsed: 4.015621798s May 1 00:37:42.732: INFO: Pod "pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020085771s STEP: Saw pod success May 1 00:37:42.732: INFO: Pod "pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430" satisfied condition "Succeeded or Failed" May 1 00:37:42.735: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430 container projected-configmap-volume-test: STEP: delete the pod May 1 00:37:42.791: INFO: Waiting for pod pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430 to disappear May 1 00:37:42.803: INFO: Pod pod-projected-configmaps-802f9c7b-00e0-4cd8-9d47-d3ad2fb5b430 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:37:42.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-552" for this suite. 
• [SLOW TEST:6.262 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":141,"skipped":2291,"failed":0} [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:37:42.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-ec9e0670-91c4-4143-ade4-dccc44e5708d STEP: Creating a pod to test consume secrets May 1 00:37:42.901: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f" in namespace "projected-6748" to be "Succeeded or Failed" May 1 00:37:42.922: INFO: Pod "pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.085408ms May 1 00:37:44.925: INFO: Pod "pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023797545s May 1 00:37:46.945: INFO: Pod "pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043689111s STEP: Saw pod success May 1 00:37:46.945: INFO: Pod "pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f" satisfied condition "Succeeded or Failed" May 1 00:37:46.948: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f container secret-volume-test: STEP: delete the pod May 1 00:37:47.020: INFO: Waiting for pod pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f to disappear May 1 00:37:47.031: INFO: Pod pod-projected-secrets-da37b9d3-8224-4157-b669-e35f49a8548f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:37:47.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6748" for this suite. 
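Nothing stops two volumes in the same pod from projecting the same secret; each mount gets its own copy, which is what "consumable in multiple volumes" verifies. A sketch with assumed names (only the namespace echoes the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets      # illustrative
  namespace: projected-6748
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # assumed secret name, projected twice
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test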
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":290,"completed":142,"skipped":2291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:37:47.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:37:48.007: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:37:50.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890268, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890268, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890268, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890267, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:37:53.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:37:53.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9020" for this suite. STEP: Destroying namespace "webhook-9020-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":290,"completed":143,"skipped":2316,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:37:53.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:37:53.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942" in namespace "projected-1515" to be "Succeeded or Failed" May 1 00:37:53.475: INFO: Pod "downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183604ms May 1 00:37:55.480: INFO: Pod "downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008799784s May 1 00:37:57.484: INFO: Pod "downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013204348s STEP: Saw pod success May 1 00:37:57.484: INFO: Pod "downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942" satisfied condition "Succeeded or Failed" May 1 00:37:57.487: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942 container client-container: STEP: delete the pod May 1 00:37:57.519: INFO: Waiting for pod downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942 to disappear May 1 00:37:57.529: INFO: Pod downwardapi-volume-c75d1312-6afe-4338-a4a7-07c1341fb942 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:37:57.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1515" for this suite. 
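When a container declares no memory limit, the downward API falls back to reporting the node's allocatable memory, which is the behavior this spec checks. A sketch of the volume shape (pod and volume names are illustrative; a resourceFieldRef in a volume must name its target container):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test    # illustrative
  namespace: projected-1515
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/memory_limit"]
    # deliberately no resources.limits.memory here
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # unset -> node allocatable is reported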
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":144,"skipped":2319,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:37:57.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:37:57.622: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-deed52dd-e26b-4fcb-9f99-63bfbd5aa3fc" in namespace "security-context-test-7103" to be "Succeeded or Failed" May 1 00:37:57.624: INFO: Pod "busybox-privileged-false-deed52dd-e26b-4fcb-9f99-63bfbd5aa3fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37234ms May 1 00:37:59.629: INFO: Pod "busybox-privileged-false-deed52dd-e26b-4fcb-9f99-63bfbd5aa3fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006509253s May 1 00:38:01.634: INFO: Pod "busybox-privileged-false-deed52dd-e26b-4fcb-9f99-63bfbd5aa3fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011641892s May 1 00:38:01.634: INFO: Pod "busybox-privileged-false-deed52dd-e26b-4fcb-9f99-63bfbd5aa3fc" satisfied condition "Succeeded or Failed" May 1 00:38:01.641: INFO: Got logs for pod "busybox-privileged-false-deed52dd-e26b-4fcb-9f99-63bfbd5aa3fc": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:01.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7103" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":145,"skipped":2330,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:01.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 1 00:38:06.310: INFO: Successfully updated pod "annotationupdate666f07b5-e1fd-4c40-93c7-e7f0328b0cb9" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:08.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9099" for this suite. • [SLOW TEST:6.701 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":290,"completed":146,"skipped":2333,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:08.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 1 00:38:12.477: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:12.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4863" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":290,"completed":147,"skipped":2351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:12.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 1 00:38:12.662: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5490 /api/v1/namespaces/watch-5490/configmaps/e2e-watch-test-label-changed 24ae6318-d321-4d8b-9904-e58d352c3f67 457988 0 2020-05-01 00:38:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 00:38:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:38:12.662: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5490 /api/v1/namespaces/watch-5490/configmaps/e2e-watch-test-label-changed 24ae6318-d321-4d8b-9904-e58d352c3f67 457989 0 2020-05-01 00:38:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 00:38:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:38:12.662: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5490 /api/v1/namespaces/watch-5490/configmaps/e2e-watch-test-label-changed 24ae6318-d321-4d8b-9904-e58d352c3f67 457990 0 2020-05-01 00:38:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 00:38:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third 
time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 1 00:38:22.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5490 /api/v1/namespaces/watch-5490/configmaps/e2e-watch-test-label-changed 24ae6318-d321-4d8b-9904-e58d352c3f67 458038 0 2020-05-01 00:38:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 00:38:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:38:22.700: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5490 /api/v1/namespaces/watch-5490/configmaps/e2e-watch-test-label-changed 24ae6318-d321-4d8b-9904-e58d352c3f67 458039 0 2020-05-01 00:38:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 00:38:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:38:22.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5490 /api/v1/namespaces/watch-5490/configmaps/e2e-watch-test-label-changed 24ae6318-d321-4d8b-9904-e58d352c3f67 458040 0 2020-05-01 00:38:12 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 00:38:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:22.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5490" for this suite. 
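Editor's note: the event sequence above (DELETED when the label stops matching, a fresh ADDED when it is restored) is standard label-selector watch semantics. A minimal sketch of such a watch, assuming a *kubernetes.Clientset built as in the earlier sketch; the label selector matches the test's, the namespace is illustrative:

package watchdemo

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps prints every event for ConfigMaps carrying the
// test's label. From the watch's point of view, removing the label is a
// DELETED event and restoring it is a new ADDED event, as logged above.
func watchLabeledConfigMaps(ctx context.Context, clientset *kubernetes.Clientset, namespace string) error {
	w, err := clientset.CoreV1().ConfigMaps(namespace).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %v %T\n", ev.Type, ev.Object)
	}
	return nil
}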
• [SLOW TEST:10.150 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":290,"completed":148,"skipped":2420,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:22.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:38:22.778: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:26.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7607" for this suite. 
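Editor's note: this spec dials the pod's exec subresource directly over a websocket. The more common client-go route to the same subresource is the SPDY executor, sketched below; pod, namespace, and command are illustrative, and `config` is the *rest.Config built from the kubeconfig:

package execdemo

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// runInPod streams a command through the pod's exec subresource, the same
// API surface the websocket-based spec above exercises.
func runInPod(config *rest.Config, clientset *kubernetes.Clientset, namespace, pod string) error {
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace(namespace).
		Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution works"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}
	return exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr})
}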
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":290,"completed":149,"skipped":2480,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:26.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 1 00:38:26.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 1 00:38:27.177: INFO: stderr: "" May 1 00:38:27.177: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:27.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8038" for this suite. 
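Editor's note: `kubectl api-versions` is a thin wrapper over the discovery endpoint, so the same check ("is v1 served?") takes a few lines with the discovery client. A sketch, assuming a *kubernetes.Clientset as before:

package discoverydemo

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// hasCoreV1 walks the served group/versions, as `kubectl api-versions`
// does, and reports whether the core "v1" version is among them.
func hasCoreV1(clientset *kubernetes.Clientset) (bool, error) {
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		return false, err
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "apps/v1", ..., "v1"
			if v.GroupVersion == "v1" {
				found = true
			}
		}
	}
	return found, nil
}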
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":290,"completed":150,"skipped":2484,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:27.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 1 00:38:33.314: INFO: Pod pod-hostip-4c3c1fba-9ab7-4c7f-8940-4b22f5387a54 has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:33.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9322" for this suite. • [SLOW TEST:6.126 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":290,"completed":151,"skipped":2501,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:33.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-33e539c5-b970-4593-8bfa-8df108cb87a6 STEP: Creating configMap with name cm-test-opt-upd-99403339-45b2-4794-a158-481e7a10a56e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-33e539c5-b970-4593-8bfa-8df108cb87a6 STEP: Updating configmap cm-test-opt-upd-99403339-45b2-4794-a158-481e7a10a56e STEP: Creating configMap with name cm-test-opt-create-4d788f25-173c-412e-ac65-0ec41fd4201a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:41.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-334" for this suite. 
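Editor's note: the "optional" behaviour this spec waits on comes from marking the projected ConfigMap source optional, so the volume tolerates a source being deleted (cm-test-opt-del) or created only later (cm-test-opt-create). A sketch of such a volume; the surrounding pod spec and names passed in are assumed:

package projecteddemo

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume projects a ConfigMap that may legitimately be
// absent: deleting it removes the projected files instead of breaking the
// pod, and creating it later populates them, which is the update the spec
// above waits to observe in the volume.
func optionalConfigMapVolume(name, configMap string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMap},
						Optional:             &optional,
					},
				}},
			},
		},
	}
}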
• [SLOW TEST:8.282 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":152,"skipped":2506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:41.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 1 00:38:41.682: INFO: Waiting up to 5m0s for pod "var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc" in namespace "var-expansion-4292" to be "Succeeded or Failed" May 1 00:38:41.696: INFO: Pod "var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.460039ms May 1 00:38:43.743: INFO: Pod "var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060674818s May 1 00:38:45.748: INFO: Pod "var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064984544s STEP: Saw pod success May 1 00:38:45.748: INFO: Pod "var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc" satisfied condition "Succeeded or Failed" May 1 00:38:45.751: INFO: Trying to get logs from node latest-worker2 pod var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc container dapi-container: STEP: delete the pod May 1 00:38:45.777: INFO: Waiting for pod var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc to disappear May 1 00:38:45.899: INFO: Pod var-expansion-6b6abdbf-a890-44db-816a-417ffbc10dfc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:45.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4292" for this suite. 
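Editor's note: env composition uses the $(VAR) expansion syntax: a variable may reference variables declared earlier in the same list, and the kubelet expands the references before starting the container. A minimal container spec in the shape this test uses; the variable names and values are illustrative:

package envdemo

import corev1 "k8s.io/api/core/v1"

// composedEnvContainer shows $(VAR) composition: FOOBAR is assembled from
// the two variables declared before it in the list.
func composedEnvContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{Name: "FOO", Value: "foo-value"},
			{Name: "BAR", Value: "bar-value"},
			// Expanded by the kubelet to "foo-value;;bar-value".
			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
		},
	}
}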
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":290,"completed":153,"skipped":2536,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:45.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:38:46.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3" in namespace "projected-3688" to be "Succeeded or Failed" May 1 00:38:46.135: INFO: Pod "downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.5519ms May 1 00:38:48.139: INFO: Pod "downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007578791s May 1 00:38:50.143: INFO: Pod "downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3": Phase="Running", Reason="", readiness=true. Elapsed: 4.011737149s May 1 00:38:52.147: INFO: Pod "downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015993161s STEP: Saw pod success May 1 00:38:52.147: INFO: Pod "downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3" satisfied condition "Succeeded or Failed" May 1 00:38:52.151: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3 container client-container: STEP: delete the pod May 1 00:38:52.206: INFO: Waiting for pod downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3 to disappear May 1 00:38:52.213: INFO: Pod downwardapi-volume-5eac9f73-7455-4d2f-b0f1-c1358f4573e3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:52.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3688" for this suite. 
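Editor's note: setting a mode on a single item is done per file in the projected downward API source, independently of the volume's defaultMode. A sketch, assuming the conventional 0400 mode and metadata.name field (the exact values in the test source are not shown in this log):

package downwarddemo

import corev1 "k8s.io/api/core/v1"

// podNameVolume exposes metadata.name as a file with an explicit per-item
// mode; the client container can then stat the file and print its
// permissions for the test to verify.
func podNameVolume() corev1.Volume {
	mode := int32(0400) // per-item mode, overriding the volume default
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}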
• [SLOW TEST:6.313 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":154,"skipped":2548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:52.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:38:52.264: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 1 00:38:54.326: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:38:55.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8243" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":290,"completed":155,"skipped":2596,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:38:55.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 00:39:06.092: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 00:39:06.111: INFO: Pod pod-with-prestop-http-hook still exists May 1 00:39:08.111: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 00:39:08.115: INFO: Pod pod-with-prestop-http-hook still exists May 1 00:39:10.111: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 00:39:10.114: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:39:10.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2237" for this suite. • [SLOW TEST:14.461 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":290,"completed":156,"skipped":2604,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:39:10.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c62edecc-78a1-4a2f-9412-bd1c40e52942 STEP: Creating a pod to test consume secrets May 1 00:39:10.192: INFO: Waiting up to 5m0s for pod "pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825" in namespace "secrets-3304" to be "Succeeded or Failed" May 1 00:39:10.196: INFO: Pod "pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799516ms May 1 00:39:12.200: INFO: Pod "pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007662521s May 1 00:39:14.204: INFO: Pod "pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012160492s STEP: Saw pod success May 1 00:39:14.204: INFO: Pod "pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825" satisfied condition "Succeeded or Failed" May 1 00:39:14.207: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825 container secret-volume-test: STEP: delete the pod May 1 00:39:14.254: INFO: Waiting for pod pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825 to disappear May 1 00:39:14.262: INFO: Pod pod-secrets-431b6f0f-ef38-468b-83b4-dbaa59571825 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:39:14.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3304" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":290,"completed":157,"skipped":2615,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:39:14.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:39:15.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:39:17.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:39:19.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, 
loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890355, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:39:22.227: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 1 00:39:26.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-6254 to-be-attached-pod -i -c=container1' May 1 00:39:29.286: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:39:29.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6254" for this suite. STEP: Destroying namespace "webhook-6254-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.132 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":290,"completed":158,"skipped":2629,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:39:29.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 1 00:39:29.526: INFO: Pod name pod-release: Found 0 pods out of 1 May 1 00:39:34.558: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] 
[sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:39:34.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6886" for this suite. • [SLOW TEST:5.482 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":290,"completed":159,"skipped":2651,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:39:34.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 1 00:39:35.600: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 1 00:39:35.676: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 1 00:39:35.676: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 1 00:39:35.773: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 1 00:39:35.773: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 1 00:39:36.289: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} 
memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 1 00:39:36.289: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 1 00:39:43.591: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:39:43.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9925" for this suite. • [SLOW TEST:8.774 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":290,"completed":160,"skipped":2668,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:39:43.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:39:43.747: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 1 00:39:43.770: INFO: Pod name sample-pod: Found 0 pods out of 1 May 1 00:39:48.779: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 00:39:48.779: INFO: Creating deployment "test-rolling-update-deployment" May 1 00:39:48.796: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 1 00:39:48.808: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 1 00:39:50.976: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 1 00:39:50.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:39:52.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890388, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:39:54.982: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 1 00:39:54.991: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2056 /apis/apps/v1/namespaces/deployment-2056/deployments/test-rolling-update-deployment a453798a-9555-41e5-bd08-32737de875e6 458825 1 2020-05-01 00:39:48 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-01 00:39:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 00:39:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b87718 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-01 00:39:48 +0000
UTC,LastTransitionTime:2020-05-01 00:39:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-01 00:39:53 +0000 UTC,LastTransitionTime:2020-05-01 00:39:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 1 00:39:54.994: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-2056 /apis/apps/v1/namespaces/deployment-2056/replicasets/test-rolling-update-deployment-df7bb669b 9c819469-4e12-4f15-b4f4-a40ed0605472 458812 1 2020-05-01 00:39:48 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a453798a-9555-41e5-bd08-32737de875e6 0xc002050c40 0xc002050c41}] [] [{kube-controller-manager Update apps/v1 2020-05-01 00:39:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a453798a-9555-41e5-bd08-32737de875e6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002050cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 1 00:39:54.994: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 1 00:39:54.994: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller 
deployment-2056 /apis/apps/v1/namespaces/deployment-2056/replicasets/test-rolling-update-controller 53c7b7dc-4e39-41fb-98ef-4461784896f5 458824 2 2020-05-01 00:39:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a453798a-9555-41e5-bd08-32737de875e6 0xc002050b37 0xc002050b38}] [] [{e2e.test Update apps/v1 2020-05-01 00:39:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 00:39:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a453798a-9555-41e5-bd08-32737de875e6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002050bd8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 1 00:39:54.997: INFO: Pod "test-rolling-update-deployment-df7bb669b-n566b" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-n566b test-rolling-update-deployment-df7bb669b- deployment-2056 /api/v1/namespaces/deployment-2056/pods/test-rolling-update-deployment-df7bb669b-n566b fe7a3299-4421-4a9b-8eb2-0f9eed7eea85 458811 0 2020-05-01 00:39:48 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 9c819469-4e12-4f15-b4f4-a40ed0605472 0xc0020511d0 0xc0020511d1}] [] [{kube-controller-manager Update v1 2020-05-01 00:39:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c819469-4e12-4f15-b4f4-a40ed0605472\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 00:39:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vmvtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vmvtx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vmvtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreac
hable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 00:39:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 00:39:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 00:39:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 00:39:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.139,StartTime:2020-05-01 00:39:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 00:39:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://48e5f2b65cd268d0b9d728641f2758d064885010bf8e944c288104322f7983d3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:39:54.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2056" for this suite. 
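Editor's note: the Deployment dump above shows the two knobs driving the rollout: Strategy.Type RollingUpdate with MaxUnavailable and MaxSurge both at 25%. A sketch of a Deployment in that shape, whose selector matches the pre-existing "sample-pod" replica set so the controller adopts it and rolls its pods over to the agnhost template (image and labels follow the log; everything else is assumed):

package rollingdemo

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateDeployment builds a 1-replica Deployment with the default
// 25%/25% rolling-update bounds seen in the dump above.
func rollingUpdateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // at most 25% of pods down
					MaxSurge:       &maxSurge,       // at most 25% extra pods up
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
					}},
				},
			},
		},
	}
}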
• [SLOW TEST:11.419 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":290,"completed":161,"skipped":2676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:39:55.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 1 00:39:55.140: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:40:12.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9253" for this suite. • [SLOW TEST:16.978 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":290,"completed":162,"skipped":2703,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:40:12.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:40:28.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2380" for this suite. • [SLOW TEST:16.178 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":290,"completed":163,"skipped":2707,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:40:28.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:40:32.397: INFO: Waiting up to 5m0s for pod "client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f" in namespace "pods-1547" to be "Succeeded or Failed" May 1 00:40:32.457: INFO: Pod "client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 60.344794ms May 1 00:40:34.462: INFO: Pod "client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064875081s May 1 00:40:36.467: INFO: Pod "client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069993632s STEP: Saw pod success May 1 00:40:36.467: INFO: Pod "client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f" satisfied condition "Succeeded or Failed" May 1 00:40:36.470: INFO: Trying to get logs from node latest-worker pod client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f container env3cont: STEP: delete the pod May 1 00:40:36.495: INFO: Waiting for pod client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f to disappear May 1 00:40:36.522: INFO: Pod client-envvars-64e3fe01-e52e-45cf-9b9a-d11f74a1bc1f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:40:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1547" for this suite. 
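------------------------------
The service environment variables asserted above are injected by the kubelet for every service that already exists when a pod starts: a service named env-demo surfaces as ENV_DEMO_SERVICE_HOST and ENV_DEMO_SERVICE_PORT inside the pod. A rough kubectl equivalent (names are illustrative; any image that ships a shell works in place of agnhost):

# The service must exist before the pod starts for the variables to appear.
kubectl create service clusterip env-demo --tcp=80:80
kubectl run env-client --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 \
  --restart=Never --command -- sh -c 'env | grep ENV_DEMO_'
# Once the pod completes, its log should list ENV_DEMO_SERVICE_HOST,
# ENV_DEMO_SERVICE_PORT, and related variables.
kubectl logs env-client
------------------------------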
• [SLOW TEST:8.329 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":290,"completed":164,"skipped":2708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:40:36.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1100 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1100 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1100 May 1 00:40:36.690: INFO: Found 0 stateful pods, waiting for 1 May 1 00:40:46.702: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 1 00:40:46.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 00:40:46.966: INFO: stderr: "I0501 00:40:46.847053 1756 log.go:172] (0xc000b754a0) (0xc000709c20) Create stream\nI0501 00:40:46.847107 1756 log.go:172] (0xc000b754a0) (0xc000709c20) Stream added, broadcasting: 1\nI0501 00:40:46.852012 1756 log.go:172] (0xc000b754a0) Reply frame received for 1\nI0501 00:40:46.852079 1756 log.go:172] (0xc000b754a0) (0xc000856640) Create stream\nI0501 00:40:46.852110 1756 log.go:172] (0xc000b754a0) (0xc000856640) Stream added, broadcasting: 3\nI0501 00:40:46.854642 1756 log.go:172] (0xc000b754a0) Reply frame received for 3\nI0501 00:40:46.854686 1756 log.go:172] (0xc000b754a0) (0xc00053f9a0) Create stream\nI0501 00:40:46.854706 1756 log.go:172] (0xc000b754a0) (0xc00053f9a0) Stream added, broadcasting: 5\nI0501 00:40:46.855530 1756 log.go:172] (0xc000b754a0) Reply frame received for 5\nI0501 00:40:46.932516 1756 log.go:172] (0xc000b754a0) Data frame received for 5\nI0501 00:40:46.932553 1756 log.go:172] (0xc00053f9a0) (5) Data frame handling\nI0501 00:40:46.932587 1756 log.go:172] 
(0xc00053f9a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:40:46.957107 1756 log.go:172] (0xc000b754a0) Data frame received for 3\nI0501 00:40:46.957346 1756 log.go:172] (0xc000856640) (3) Data frame handling\nI0501 00:40:46.957362 1756 log.go:172] (0xc000856640) (3) Data frame sent\nI0501 00:40:46.957383 1756 log.go:172] (0xc000b754a0) Data frame received for 3\nI0501 00:40:46.957419 1756 log.go:172] (0xc000856640) (3) Data frame handling\nI0501 00:40:46.957452 1756 log.go:172] (0xc000b754a0) Data frame received for 5\nI0501 00:40:46.957470 1756 log.go:172] (0xc00053f9a0) (5) Data frame handling\nI0501 00:40:46.959117 1756 log.go:172] (0xc000b754a0) Data frame received for 1\nI0501 00:40:46.959150 1756 log.go:172] (0xc000709c20) (1) Data frame handling\nI0501 00:40:46.959283 1756 log.go:172] (0xc000709c20) (1) Data frame sent\nI0501 00:40:46.959313 1756 log.go:172] (0xc000b754a0) (0xc000709c20) Stream removed, broadcasting: 1\nI0501 00:40:46.959347 1756 log.go:172] (0xc000b754a0) Go away received\nI0501 00:40:46.960822 1756 log.go:172] (0xc000b754a0) (0xc000709c20) Stream removed, broadcasting: 1\nI0501 00:40:46.960855 1756 log.go:172] (0xc000b754a0) (0xc000856640) Stream removed, broadcasting: 3\nI0501 00:40:46.960885 1756 log.go:172] (0xc000b754a0) (0xc00053f9a0) Stream removed, broadcasting: 5\n" May 1 00:40:46.967: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 00:40:46.967: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 00:40:46.971: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 1 00:40:56.974: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 00:40:56.974: INFO: Waiting for statefulset status.replicas updated to 0 May 1 00:40:56.990: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999425s May 1 00:40:57.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994344342s May 1 00:40:59.014: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991087922s May 1 00:41:00.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970229241s May 1 00:41:01.023: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.964702177s May 1 00:41:02.027: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.960719196s May 1 00:41:03.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.956548583s May 1 00:41:04.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.951849923s May 1 00:41:05.041: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.9472109s May 1 00:41:06.046: INFO: Verifying statefulset ss doesn't scale past 1 for another 942.775543ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1100 May 1 00:41:07.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 00:41:07.269: INFO: stderr: "I0501 00:41:07.190798 1780 log.go:172] (0xc000b15130) (0xc000bfc320) Create stream\nI0501 00:41:07.190859 1780 log.go:172] (0xc000b15130) (0xc000bfc320) Stream added, broadcasting: 1\nI0501 00:41:07.196960 1780 log.go:172] (0xc000b15130) Reply frame received for 
1\nI0501 00:41:07.197014 1780 log.go:172] (0xc000b15130) (0xc0006b4c80) Create stream\nI0501 00:41:07.197028 1780 log.go:172] (0xc000b15130) (0xc0006b4c80) Stream added, broadcasting: 3\nI0501 00:41:07.199504 1780 log.go:172] (0xc000b15130) Reply frame received for 3\nI0501 00:41:07.199533 1780 log.go:172] (0xc000b15130) (0xc0006ae500) Create stream\nI0501 00:41:07.199543 1780 log.go:172] (0xc000b15130) (0xc0006ae500) Stream added, broadcasting: 5\nI0501 00:41:07.200802 1780 log.go:172] (0xc000b15130) Reply frame received for 5\nI0501 00:41:07.263168 1780 log.go:172] (0xc000b15130) Data frame received for 3\nI0501 00:41:07.263195 1780 log.go:172] (0xc0006b4c80) (3) Data frame handling\nI0501 00:41:07.263206 1780 log.go:172] (0xc0006b4c80) (3) Data frame sent\nI0501 00:41:07.263211 1780 log.go:172] (0xc000b15130) Data frame received for 3\nI0501 00:41:07.263216 1780 log.go:172] (0xc0006b4c80) (3) Data frame handling\nI0501 00:41:07.263245 1780 log.go:172] (0xc000b15130) Data frame received for 5\nI0501 00:41:07.263263 1780 log.go:172] (0xc0006ae500) (5) Data frame handling\nI0501 00:41:07.263277 1780 log.go:172] (0xc0006ae500) (5) Data frame sent\nI0501 00:41:07.263285 1780 log.go:172] (0xc000b15130) Data frame received for 5\nI0501 00:41:07.263301 1780 log.go:172] (0xc0006ae500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:41:07.264345 1780 log.go:172] (0xc000b15130) Data frame received for 1\nI0501 00:41:07.264360 1780 log.go:172] (0xc000bfc320) (1) Data frame handling\nI0501 00:41:07.264367 1780 log.go:172] (0xc000bfc320) (1) Data frame sent\nI0501 00:41:07.264376 1780 log.go:172] (0xc000b15130) (0xc000bfc320) Stream removed, broadcasting: 1\nI0501 00:41:07.264389 1780 log.go:172] (0xc000b15130) Go away received\nI0501 00:41:07.264736 1780 log.go:172] (0xc000b15130) (0xc000bfc320) Stream removed, broadcasting: 1\nI0501 00:41:07.264754 1780 log.go:172] (0xc000b15130) (0xc0006b4c80) Stream removed, broadcasting: 3\nI0501 00:41:07.264762 1780 log.go:172] (0xc000b15130) (0xc0006ae500) Stream removed, broadcasting: 5\n" May 1 00:41:07.269: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 00:41:07.269: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 00:41:07.272: INFO: Found 1 stateful pods, waiting for 3 May 1 00:41:17.279: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 00:41:17.279: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 00:41:17.279: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 1 00:41:17.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 00:41:17.509: INFO: stderr: "I0501 00:41:17.431425 1802 log.go:172] (0xc000998840) (0xc000412000) Create stream\nI0501 00:41:17.431479 1802 log.go:172] (0xc000998840) (0xc000412000) Stream added, broadcasting: 1\nI0501 00:41:17.433670 1802 log.go:172] (0xc000998840) Reply frame received for 1\nI0501 00:41:17.433721 1802 log.go:172] (0xc000998840) (0xc00035d9a0) Create stream\nI0501 00:41:17.433740 1802 log.go:172] (0xc000998840) (0xc00035d9a0) 
Stream added, broadcasting: 3\nI0501 00:41:17.434595 1802 log.go:172] (0xc000998840) Reply frame received for 3\nI0501 00:41:17.434632 1802 log.go:172] (0xc000998840) (0xc0006e2dc0) Create stream\nI0501 00:41:17.434642 1802 log.go:172] (0xc000998840) (0xc0006e2dc0) Stream added, broadcasting: 5\nI0501 00:41:17.435478 1802 log.go:172] (0xc000998840) Reply frame received for 5\nI0501 00:41:17.502636 1802 log.go:172] (0xc000998840) Data frame received for 5\nI0501 00:41:17.502675 1802 log.go:172] (0xc0006e2dc0) (5) Data frame handling\nI0501 00:41:17.502688 1802 log.go:172] (0xc0006e2dc0) (5) Data frame sent\nI0501 00:41:17.502699 1802 log.go:172] (0xc000998840) Data frame received for 5\nI0501 00:41:17.502708 1802 log.go:172] (0xc0006e2dc0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:41:17.502733 1802 log.go:172] (0xc000998840) Data frame received for 3\nI0501 00:41:17.502743 1802 log.go:172] (0xc00035d9a0) (3) Data frame handling\nI0501 00:41:17.502759 1802 log.go:172] (0xc00035d9a0) (3) Data frame sent\nI0501 00:41:17.502769 1802 log.go:172] (0xc000998840) Data frame received for 3\nI0501 00:41:17.502778 1802 log.go:172] (0xc00035d9a0) (3) Data frame handling\nI0501 00:41:17.504118 1802 log.go:172] (0xc000998840) Data frame received for 1\nI0501 00:41:17.504146 1802 log.go:172] (0xc000412000) (1) Data frame handling\nI0501 00:41:17.504165 1802 log.go:172] (0xc000412000) (1) Data frame sent\nI0501 00:41:17.504179 1802 log.go:172] (0xc000998840) (0xc000412000) Stream removed, broadcasting: 1\nI0501 00:41:17.504226 1802 log.go:172] (0xc000998840) Go away received\nI0501 00:41:17.504602 1802 log.go:172] (0xc000998840) (0xc000412000) Stream removed, broadcasting: 1\nI0501 00:41:17.504627 1802 log.go:172] (0xc000998840) (0xc00035d9a0) Stream removed, broadcasting: 3\nI0501 00:41:17.504646 1802 log.go:172] (0xc000998840) (0xc0006e2dc0) Stream removed, broadcasting: 5\n" May 1 00:41:17.509: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 00:41:17.509: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 00:41:17.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 00:41:17.735: INFO: stderr: "I0501 00:41:17.635084 1823 log.go:172] (0xc00092c630) (0xc00044b0e0) Create stream\nI0501 00:41:17.635155 1823 log.go:172] (0xc00092c630) (0xc00044b0e0) Stream added, broadcasting: 1\nI0501 00:41:17.638410 1823 log.go:172] (0xc00092c630) Reply frame received for 1\nI0501 00:41:17.638474 1823 log.go:172] (0xc00092c630) (0xc000576460) Create stream\nI0501 00:41:17.638494 1823 log.go:172] (0xc00092c630) (0xc000576460) Stream added, broadcasting: 3\nI0501 00:41:17.639474 1823 log.go:172] (0xc00092c630) Reply frame received for 3\nI0501 00:41:17.639526 1823 log.go:172] (0xc00092c630) (0xc000576d20) Create stream\nI0501 00:41:17.639543 1823 log.go:172] (0xc00092c630) (0xc000576d20) Stream added, broadcasting: 5\nI0501 00:41:17.640579 1823 log.go:172] (0xc00092c630) Reply frame received for 5\nI0501 00:41:17.702253 1823 log.go:172] (0xc00092c630) Data frame received for 5\nI0501 00:41:17.702275 1823 log.go:172] (0xc000576d20) (5) Data frame handling\nI0501 00:41:17.702288 1823 log.go:172] (0xc000576d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0501 00:41:17.726344 1823 log.go:172] (0xc00092c630) Data frame received for 3\nI0501 00:41:17.726392 1823 log.go:172] (0xc000576460) (3) Data frame handling\nI0501 00:41:17.726454 1823 log.go:172] (0xc000576460) (3) Data frame sent\nI0501 00:41:17.726567 1823 log.go:172] (0xc00092c630) Data frame received for 3\nI0501 00:41:17.726595 1823 log.go:172] (0xc000576460) (3) Data frame handling\nI0501 00:41:17.726812 1823 log.go:172] (0xc00092c630) Data frame received for 5\nI0501 00:41:17.726847 1823 log.go:172] (0xc000576d20) (5) Data frame handling\nI0501 00:41:17.728755 1823 log.go:172] (0xc00092c630) Data frame received for 1\nI0501 00:41:17.728780 1823 log.go:172] (0xc00044b0e0) (1) Data frame handling\nI0501 00:41:17.728792 1823 log.go:172] (0xc00044b0e0) (1) Data frame sent\nI0501 00:41:17.728812 1823 log.go:172] (0xc00092c630) (0xc00044b0e0) Stream removed, broadcasting: 1\nI0501 00:41:17.728834 1823 log.go:172] (0xc00092c630) Go away received\nI0501 00:41:17.729509 1823 log.go:172] (0xc00092c630) (0xc00044b0e0) Stream removed, broadcasting: 1\nI0501 00:41:17.729551 1823 log.go:172] (0xc00092c630) (0xc000576460) Stream removed, broadcasting: 3\nI0501 00:41:17.729579 1823 log.go:172] (0xc00092c630) (0xc000576d20) Stream removed, broadcasting: 5\n" May 1 00:41:17.735: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 00:41:17.735: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 00:41:17.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 00:41:18.047: INFO: stderr: "I0501 00:41:17.917587 1845 log.go:172] (0xc000a8d080) (0xc0006d4dc0) Create stream\nI0501 00:41:17.917639 1845 log.go:172] (0xc000a8d080) (0xc0006d4dc0) Stream added, broadcasting: 1\nI0501 00:41:17.920551 1845 log.go:172] (0xc000a8d080) Reply frame received for 1\nI0501 00:41:17.920580 1845 log.go:172] (0xc000a8d080) (0xc0006d5d60) Create stream\nI0501 00:41:17.920588 1845 log.go:172] (0xc000a8d080) (0xc0006d5d60) Stream added, broadcasting: 3\nI0501 00:41:17.921525 1845 log.go:172] (0xc000a8d080) Reply frame received for 3\nI0501 00:41:17.921561 1845 log.go:172] (0xc000a8d080) (0xc0006e4b40) Create stream\nI0501 00:41:17.921572 1845 log.go:172] (0xc000a8d080) (0xc0006e4b40) Stream added, broadcasting: 5\nI0501 00:41:17.922400 1845 log.go:172] (0xc000a8d080) Reply frame received for 5\nI0501 00:41:17.982986 1845 log.go:172] (0xc000a8d080) Data frame received for 5\nI0501 00:41:17.983020 1845 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0501 00:41:17.983039 1845 log.go:172] (0xc0006e4b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:41:18.039474 1845 log.go:172] (0xc000a8d080) Data frame received for 3\nI0501 00:41:18.039503 1845 log.go:172] (0xc0006d5d60) (3) Data frame handling\nI0501 00:41:18.039523 1845 log.go:172] (0xc0006d5d60) (3) Data frame sent\nI0501 00:41:18.039573 1845 log.go:172] (0xc000a8d080) Data frame received for 3\nI0501 00:41:18.039583 1845 log.go:172] (0xc0006d5d60) (3) Data frame handling\nI0501 00:41:18.039822 1845 log.go:172] (0xc000a8d080) Data frame received for 5\nI0501 00:41:18.039836 1845 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0501 00:41:18.042121 1845 log.go:172] (0xc000a8d080) Data frame received for 1\nI0501 
00:41:18.042158 1845 log.go:172] (0xc0006d4dc0) (1) Data frame handling\nI0501 00:41:18.042183 1845 log.go:172] (0xc0006d4dc0) (1) Data frame sent\nI0501 00:41:18.042200 1845 log.go:172] (0xc000a8d080) (0xc0006d4dc0) Stream removed, broadcasting: 1\nI0501 00:41:18.042217 1845 log.go:172] (0xc000a8d080) Go away received\nI0501 00:41:18.042586 1845 log.go:172] (0xc000a8d080) (0xc0006d4dc0) Stream removed, broadcasting: 1\nI0501 00:41:18.042609 1845 log.go:172] (0xc000a8d080) (0xc0006d5d60) Stream removed, broadcasting: 3\nI0501 00:41:18.042625 1845 log.go:172] (0xc000a8d080) (0xc0006e4b40) Stream removed, broadcasting: 5\n" May 1 00:41:18.047: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 00:41:18.047: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 00:41:18.047: INFO: Waiting for statefulset status.replicas updated to 0 May 1 00:41:18.051: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 1 00:41:28.060: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 00:41:28.060: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 00:41:28.060: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 00:41:28.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999462s May 1 00:41:29.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993083828s May 1 00:41:30.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987022897s May 1 00:41:31.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981356216s May 1 00:41:32.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976252483s May 1 00:41:33.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97035318s May 1 00:41:34.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964883981s May 1 00:41:35.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959349939s May 1 00:41:36.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953934321s May 1 00:41:37.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.367746ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1100 May 1 00:41:38.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 00:41:38.333: INFO: stderr: "I0501 00:41:38.271177 1865 log.go:172] (0xc000a2dad0) (0xc000afa320) Create stream\nI0501 00:41:38.271276 1865 log.go:172] (0xc000a2dad0) (0xc000afa320) Stream added, broadcasting: 1\nI0501 00:41:38.276693 1865 log.go:172] (0xc000a2dad0) Reply frame received for 1\nI0501 00:41:38.276740 1865 log.go:172] (0xc000a2dad0) (0xc0005bc780) Create stream\nI0501 00:41:38.276756 1865 log.go:172] (0xc000a2dad0) (0xc0005bc780) Stream added, broadcasting: 3\nI0501 00:41:38.278102 1865 log.go:172] (0xc000a2dad0) Reply frame received for 3\nI0501 00:41:38.278143 1865 log.go:172] (0xc000a2dad0) (0xc0004de6e0) Create stream\nI0501 00:41:38.278153 1865 log.go:172] (0xc000a2dad0) (0xc0004de6e0) Stream added, broadcasting: 5\nI0501 00:41:38.278939 1865 log.go:172] (0xc000a2dad0) Reply frame received 
for 5\nI0501 00:41:38.326096 1865 log.go:172] (0xc000a2dad0) Data frame received for 3\nI0501 00:41:38.326140 1865 log.go:172] (0xc0005bc780) (3) Data frame handling\nI0501 00:41:38.326161 1865 log.go:172] (0xc0005bc780) (3) Data frame sent\nI0501 00:41:38.326177 1865 log.go:172] (0xc000a2dad0) Data frame received for 3\nI0501 00:41:38.326190 1865 log.go:172] (0xc0005bc780) (3) Data frame handling\nI0501 00:41:38.326240 1865 log.go:172] (0xc000a2dad0) Data frame received for 5\nI0501 00:41:38.326289 1865 log.go:172] (0xc0004de6e0) (5) Data frame handling\nI0501 00:41:38.326343 1865 log.go:172] (0xc0004de6e0) (5) Data frame sent\nI0501 00:41:38.326371 1865 log.go:172] (0xc000a2dad0) Data frame received for 5\nI0501 00:41:38.326396 1865 log.go:172] (0xc0004de6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:41:38.327801 1865 log.go:172] (0xc000a2dad0) Data frame received for 1\nI0501 00:41:38.327841 1865 log.go:172] (0xc000afa320) (1) Data frame handling\nI0501 00:41:38.327869 1865 log.go:172] (0xc000afa320) (1) Data frame sent\nI0501 00:41:38.327888 1865 log.go:172] (0xc000a2dad0) (0xc000afa320) Stream removed, broadcasting: 1\nI0501 00:41:38.327924 1865 log.go:172] (0xc000a2dad0) Go away received\nI0501 00:41:38.328235 1865 log.go:172] (0xc000a2dad0) (0xc000afa320) Stream removed, broadcasting: 1\nI0501 00:41:38.328260 1865 log.go:172] (0xc000a2dad0) (0xc0005bc780) Stream removed, broadcasting: 3\nI0501 00:41:38.328269 1865 log.go:172] (0xc000a2dad0) (0xc0004de6e0) Stream removed, broadcasting: 5\n" May 1 00:41:38.333: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 00:41:38.333: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 00:41:38.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 00:41:38.570: INFO: stderr: "I0501 00:41:38.502806 1888 log.go:172] (0xc000c3f3f0) (0xc000a52640) Create stream\nI0501 00:41:38.502879 1888 log.go:172] (0xc000c3f3f0) (0xc000a52640) Stream added, broadcasting: 1\nI0501 00:41:38.507052 1888 log.go:172] (0xc000c3f3f0) Reply frame received for 1\nI0501 00:41:38.507086 1888 log.go:172] (0xc000c3f3f0) (0xc000858000) Create stream\nI0501 00:41:38.507095 1888 log.go:172] (0xc000c3f3f0) (0xc000858000) Stream added, broadcasting: 3\nI0501 00:41:38.508003 1888 log.go:172] (0xc000c3f3f0) Reply frame received for 3\nI0501 00:41:38.508049 1888 log.go:172] (0xc000c3f3f0) (0xc000722640) Create stream\nI0501 00:41:38.508077 1888 log.go:172] (0xc000c3f3f0) (0xc000722640) Stream added, broadcasting: 5\nI0501 00:41:38.508939 1888 log.go:172] (0xc000c3f3f0) Reply frame received for 5\nI0501 00:41:38.564617 1888 log.go:172] (0xc000c3f3f0) Data frame received for 5\nI0501 00:41:38.564671 1888 log.go:172] (0xc000722640) (5) Data frame handling\nI0501 00:41:38.564688 1888 log.go:172] (0xc000722640) (5) Data frame sent\nI0501 00:41:38.564699 1888 log.go:172] (0xc000c3f3f0) Data frame received for 5\nI0501 00:41:38.564707 1888 log.go:172] (0xc000722640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:41:38.564744 1888 log.go:172] (0xc000c3f3f0) Data frame received for 3\nI0501 00:41:38.564759 1888 log.go:172] (0xc000858000) (3) Data frame handling\nI0501 00:41:38.564770 1888 log.go:172] 
(0xc000858000) (3) Data frame sent\nI0501 00:41:38.564776 1888 log.go:172] (0xc000c3f3f0) Data frame received for 3\nI0501 00:41:38.564781 1888 log.go:172] (0xc000858000) (3) Data frame handling\nI0501 00:41:38.566169 1888 log.go:172] (0xc000c3f3f0) Data frame received for 1\nI0501 00:41:38.566181 1888 log.go:172] (0xc000a52640) (1) Data frame handling\nI0501 00:41:38.566188 1888 log.go:172] (0xc000a52640) (1) Data frame sent\nI0501 00:41:38.566197 1888 log.go:172] (0xc000c3f3f0) (0xc000a52640) Stream removed, broadcasting: 1\nI0501 00:41:38.566428 1888 log.go:172] (0xc000c3f3f0) Go away received\nI0501 00:41:38.566455 1888 log.go:172] (0xc000c3f3f0) (0xc000a52640) Stream removed, broadcasting: 1\nI0501 00:41:38.566472 1888 log.go:172] (0xc000c3f3f0) (0xc000858000) Stream removed, broadcasting: 3\nI0501 00:41:38.566483 1888 log.go:172] (0xc000c3f3f0) (0xc000722640) Stream removed, broadcasting: 5\n" May 1 00:41:38.570: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 00:41:38.570: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 00:41:38.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1100 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 00:41:38.796: INFO: stderr: "I0501 00:41:38.712666 1908 log.go:172] (0xc000ba2d10) (0xc000ae4140) Create stream\nI0501 00:41:38.712723 1908 log.go:172] (0xc000ba2d10) (0xc000ae4140) Stream added, broadcasting: 1\nI0501 00:41:38.715357 1908 log.go:172] (0xc000ba2d10) Reply frame received for 1\nI0501 00:41:38.715436 1908 log.go:172] (0xc000ba2d10) (0xc000641b80) Create stream\nI0501 00:41:38.715487 1908 log.go:172] (0xc000ba2d10) (0xc000641b80) Stream added, broadcasting: 3\nI0501 00:41:38.716332 1908 log.go:172] (0xc000ba2d10) Reply frame received for 3\nI0501 00:41:38.716366 1908 log.go:172] (0xc000ba2d10) (0xc000b660a0) Create stream\nI0501 00:41:38.716376 1908 log.go:172] (0xc000ba2d10) (0xc000b660a0) Stream added, broadcasting: 5\nI0501 00:41:38.717541 1908 log.go:172] (0xc000ba2d10) Reply frame received for 5\nI0501 00:41:38.787627 1908 log.go:172] (0xc000ba2d10) Data frame received for 5\nI0501 00:41:38.787672 1908 log.go:172] (0xc000b660a0) (5) Data frame handling\nI0501 00:41:38.787710 1908 log.go:172] (0xc000b660a0) (5) Data frame sent\nI0501 00:41:38.787728 1908 log.go:172] (0xc000ba2d10) Data frame received for 5\nI0501 00:41:38.787741 1908 log.go:172] (0xc000b660a0) (5) Data frame handling\nI0501 00:41:38.787757 1908 log.go:172] (0xc000ba2d10) Data frame received for 3\nI0501 00:41:38.787786 1908 log.go:172] (0xc000641b80) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:41:38.787804 1908 log.go:172] (0xc000641b80) (3) Data frame sent\nI0501 00:41:38.787844 1908 log.go:172] (0xc000ba2d10) Data frame received for 3\nI0501 00:41:38.787861 1908 log.go:172] (0xc000641b80) (3) Data frame handling\nI0501 00:41:38.790286 1908 log.go:172] (0xc000ba2d10) Data frame received for 1\nI0501 00:41:38.790328 1908 log.go:172] (0xc000ae4140) (1) Data frame handling\nI0501 00:41:38.790348 1908 log.go:172] (0xc000ae4140) (1) Data frame sent\nI0501 00:41:38.790366 1908 log.go:172] (0xc000ba2d10) (0xc000ae4140) Stream removed, broadcasting: 1\nI0501 00:41:38.790398 1908 log.go:172] (0xc000ba2d10) Go away received\nI0501 00:41:38.790840 1908 log.go:172] (0xc000ba2d10) 
(0xc000ae4140) Stream removed, broadcasting: 1\nI0501 00:41:38.790862 1908 log.go:172] (0xc000ba2d10) (0xc000641b80) Stream removed, broadcasting: 3\nI0501 00:41:38.790878 1908 log.go:172] (0xc000ba2d10) (0xc000b660a0) Stream removed, broadcasting: 5\n" May 1 00:41:38.796: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 00:41:38.796: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 00:41:38.796: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 1 00:41:58.818: INFO: Deleting all statefulset in ns statefulset-1100 May 1 00:41:58.821: INFO: Scaling statefulset ss to 0 May 1 00:41:58.832: INFO: Waiting for statefulset status.replicas updated to 0 May 1 00:41:58.835: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:41:58.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1100" for this suite. • [SLOW TEST:82.313 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":290,"completed":165,"skipped":2744,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:41:58.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8040 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 1 00:41:59.057: INFO: Found 0 stateful pods, waiting for 3 May 1 00:42:09.061: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true May 1 00:42:09.061: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 00:42:09.061: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 1 00:42:19.062: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 00:42:19.062: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 00:42:19.062: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 1 00:42:19.092: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 1 00:42:29.147: INFO: Updating stateful set ss2 May 1 00:42:29.213: INFO: Waiting for Pod statefulset-8040/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 1 00:42:39.860: INFO: Found 2 stateful pods, waiting for 3 May 1 00:42:49.865: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 00:42:49.865: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 00:42:49.865: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 1 00:42:49.890: INFO: Updating stateful set ss2 May 1 00:42:49.942: INFO: Waiting for Pod statefulset-8040/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 00:42:59.969: INFO: Updating stateful set ss2 May 1 00:43:00.039: INFO: Waiting for StatefulSet statefulset-8040/ss2 to complete update May 1 00:43:00.039: INFO: Waiting for Pod statefulset-8040/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 1 00:43:10.048: INFO: Deleting all statefulset in ns statefulset-8040 May 1 00:43:10.051: INFO: Scaling statefulset ss2 to 0 May 1 00:43:30.064: INFO: Waiting for statefulset status.replicas updated to 0 May 1 00:43:30.067: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:43:30.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8040" for this suite. 
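------------------------------
The canary and phased behavior exercised above is driven by the StatefulSet RollingUpdate partition: only pods with an ordinal >= the partition move to the new revision, so a partition above the highest ordinal pins every pod, setting it to replicas-1 canaries only the highest ordinal, and lowering it step by step phases the rollout. A sketch against a set like ss2 (the container name web is an assumption; substitute the real one):

# Pin all pods by partitioning above the highest ordinal.
kubectl patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
# Change the template; nothing rolls yet because of the partition.
kubectl set image statefulset/ss2 web=docker.io/library/httpd:2.4.39-alpine
# Canary: only ordinal 2 is recreated on the new revision.
kubectl patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# Phase the rest through ordinals 1 and 0, then wait for completion.
kubectl patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
------------------------------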
• [SLOW TEST:91.210 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":290,"completed":166,"skipped":2762,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:43:30.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 1 00:43:35.389: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:43:35.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8843" for this suite. 
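------------------------------
The termination message read above comes from the file at terminationMessagePath (default /dev/termination-log); with FallbackToLogsOnError the kubelet falls back to the tail of the container log only when the container fails and that file is empty, so a succeeding container that wrote the file still reports the file's contents. A minimal sketch (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Write the message file and exit 0, as in the "pod succeeds" case.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod succeeds, the message is taken from the file:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------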
• [SLOW TEST:5.384 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":290,"completed":167,"skipped":2781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:43:35.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8355 STEP: creating service affinity-clusterip-transition in namespace services-8355 STEP: creating replication controller affinity-clusterip-transition in namespace services-8355 I0501 00:43:35.667714 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8355, replica count: 3 I0501 00:43:38.718176 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 00:43:41.718442 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 00:43:41.725: INFO: Creating new exec pod May 1 00:43:46.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8355 execpod-affinitydrxkr -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 1 00:43:46.999: INFO: stderr: "I0501 00:43:46.900331 1929 log.go:172] (0xc0009b1080) (0xc000a5e0a0) Create stream\nI0501 00:43:46.900381 1929 log.go:172] (0xc0009b1080) (0xc000a5e0a0) Stream added, broadcasting: 1\nI0501 00:43:46.904572 1929 log.go:172] (0xc0009b1080) Reply frame received for 1\nI0501 00:43:46.904627 1929 log.go:172] (0xc0009b1080) (0xc000696000) Create stream\nI0501 00:43:46.904640 1929 log.go:172] (0xc0009b1080) (0xc000696000) Stream added, broadcasting: 3\nI0501 00:43:46.906081 1929 log.go:172] (0xc0009b1080) Reply frame 
received for 3\nI0501 00:43:46.906123 1929 log.go:172] (0xc0009b1080) (0xc000628640) Create stream\nI0501 00:43:46.906143 1929 log.go:172] (0xc0009b1080) (0xc000628640) Stream added, broadcasting: 5\nI0501 00:43:46.907140 1929 log.go:172] (0xc0009b1080) Reply frame received for 5\nI0501 00:43:46.990412 1929 log.go:172] (0xc0009b1080) Data frame received for 5\nI0501 00:43:46.990444 1929 log.go:172] (0xc000628640) (5) Data frame handling\nI0501 00:43:46.990465 1929 log.go:172] (0xc000628640) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0501 00:43:46.991442 1929 log.go:172] (0xc0009b1080) Data frame received for 5\nI0501 00:43:46.991462 1929 log.go:172] (0xc000628640) (5) Data frame handling\nI0501 00:43:46.991471 1929 log.go:172] (0xc000628640) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0501 00:43:46.991919 1929 log.go:172] (0xc0009b1080) Data frame received for 5\nI0501 00:43:46.991958 1929 log.go:172] (0xc000628640) (5) Data frame handling\nI0501 00:43:46.991983 1929 log.go:172] (0xc0009b1080) Data frame received for 3\nI0501 00:43:46.992004 1929 log.go:172] (0xc000696000) (3) Data frame handling\nI0501 00:43:46.993581 1929 log.go:172] (0xc0009b1080) Data frame received for 1\nI0501 00:43:46.993607 1929 log.go:172] (0xc000a5e0a0) (1) Data frame handling\nI0501 00:43:46.993624 1929 log.go:172] (0xc000a5e0a0) (1) Data frame sent\nI0501 00:43:46.993761 1929 log.go:172] (0xc0009b1080) (0xc000a5e0a0) Stream removed, broadcasting: 1\nI0501 00:43:46.993803 1929 log.go:172] (0xc0009b1080) Go away received\nI0501 00:43:46.994220 1929 log.go:172] (0xc0009b1080) (0xc000a5e0a0) Stream removed, broadcasting: 1\nI0501 00:43:46.994243 1929 log.go:172] (0xc0009b1080) (0xc000696000) Stream removed, broadcasting: 3\nI0501 00:43:46.994258 1929 log.go:172] (0xc0009b1080) (0xc000628640) Stream removed, broadcasting: 5\n" May 1 00:43:46.999: INFO: stdout: "" May 1 00:43:47.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8355 execpod-affinitydrxkr -- /bin/sh -x -c nc -zv -t -w 2 10.106.246.152 80' May 1 00:43:47.206: INFO: stderr: "I0501 00:43:47.129541 1949 log.go:172] (0xc0009f31e0) (0xc000b56460) Create stream\nI0501 00:43:47.129606 1949 log.go:172] (0xc0009f31e0) (0xc000b56460) Stream added, broadcasting: 1\nI0501 00:43:47.134874 1949 log.go:172] (0xc0009f31e0) Reply frame received for 1\nI0501 00:43:47.134943 1949 log.go:172] (0xc0009f31e0) (0xc000452280) Create stream\nI0501 00:43:47.134969 1949 log.go:172] (0xc0009f31e0) (0xc000452280) Stream added, broadcasting: 3\nI0501 00:43:47.135916 1949 log.go:172] (0xc0009f31e0) Reply frame received for 3\nI0501 00:43:47.135962 1949 log.go:172] (0xc0009f31e0) (0xc000453220) Create stream\nI0501 00:43:47.135975 1949 log.go:172] (0xc0009f31e0) (0xc000453220) Stream added, broadcasting: 5\nI0501 00:43:47.136911 1949 log.go:172] (0xc0009f31e0) Reply frame received for 5\nI0501 00:43:47.198375 1949 log.go:172] (0xc0009f31e0) Data frame received for 5\nI0501 00:43:47.198427 1949 log.go:172] (0xc000453220) (5) Data frame handling\nI0501 00:43:47.198449 1949 log.go:172] (0xc000453220) (5) Data frame sent\nI0501 00:43:47.198473 1949 log.go:172] (0xc0009f31e0) Data frame received for 5\nI0501 00:43:47.198491 1949 log.go:172] (0xc000453220) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.246.152 80\nConnection to 10.106.246.152 80 port [tcp/http] succeeded!\nI0501 00:43:47.198556 1949 log.go:172] 
(0xc0009f31e0) Data frame received for 3\nI0501 00:43:47.198581 1949 log.go:172] (0xc000452280) (3) Data frame handling\nI0501 00:43:47.200230 1949 log.go:172] (0xc0009f31e0) Data frame received for 1\nI0501 00:43:47.200254 1949 log.go:172] (0xc000b56460) (1) Data frame handling\nI0501 00:43:47.200275 1949 log.go:172] (0xc000b56460) (1) Data frame sent\nI0501 00:43:47.200293 1949 log.go:172] (0xc0009f31e0) (0xc000b56460) Stream removed, broadcasting: 1\nI0501 00:43:47.200376 1949 log.go:172] (0xc0009f31e0) Go away received\nI0501 00:43:47.200731 1949 log.go:172] (0xc0009f31e0) (0xc000b56460) Stream removed, broadcasting: 1\nI0501 00:43:47.200760 1949 log.go:172] (0xc0009f31e0) (0xc000452280) Stream removed, broadcasting: 3\nI0501 00:43:47.200777 1949 log.go:172] (0xc0009f31e0) (0xc000453220) Stream removed, broadcasting: 5\n" May 1 00:43:47.206: INFO: stdout: "" May 1 00:43:47.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8355 execpod-affinitydrxkr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.246.152:80/ ; done' May 1 00:43:47.539: INFO: stderr: "I0501 00:43:47.355118 1971 log.go:172] (0xc000b4f1e0) (0xc000bdc640) Create stream\nI0501 00:43:47.355185 1971 log.go:172] (0xc000b4f1e0) (0xc000bdc640) Stream added, broadcasting: 1\nI0501 00:43:47.357911 1971 log.go:172] (0xc000b4f1e0) Reply frame received for 1\nI0501 00:43:47.357953 1971 log.go:172] (0xc000b4f1e0) (0xc000b88460) Create stream\nI0501 00:43:47.357981 1971 log.go:172] (0xc000b4f1e0) (0xc000b88460) Stream added, broadcasting: 3\nI0501 00:43:47.359003 1971 log.go:172] (0xc000b4f1e0) Reply frame received for 3\nI0501 00:43:47.359062 1971 log.go:172] (0xc000b4f1e0) (0xc0009063c0) Create stream\nI0501 00:43:47.359093 1971 log.go:172] (0xc000b4f1e0) (0xc0009063c0) Stream added, broadcasting: 5\nI0501 00:43:47.359961 1971 log.go:172] (0xc000b4f1e0) Reply frame received for 5\nI0501 00:43:47.455686 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.455705 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.455712 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.455723 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.455727 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.455734 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.460692 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.460720 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.460747 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.461256 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.461291 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.461312 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.461346 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.461363 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.461377 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.465250 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.465262 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.465267 1971 log.go:172] (0xc000b88460) (3) Data frame 
sent\nI0501 00:43:47.465786 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.465822 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.465854 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.465879 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.465894 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.465912 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.465932 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.465948 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.465978 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.468987 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.469009 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.469028 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.469787 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.469828 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.469847 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.469858 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.469877 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.469921 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.469953 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.469969 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.469988 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.473604 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.473632 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.473659 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.474013 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.474043 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.474079 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.474101 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.474135 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.474173 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.479733 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.479756 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.479785 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.480489 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.480516 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.480528 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.480565 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.480592 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.480607 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.484971 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.485080 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.485300 1971 log.go:172] (0xc000b88460) (3) 
Data frame sent\nI0501 00:43:47.485649 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.485678 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.485703 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.485720 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.485732 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.485779 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.485912 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.485936 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.485954 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.489687 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.489705 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.489717 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.490362 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.490381 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.490393 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.490403 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.490415 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.490440 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.494215 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.494234 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.494248 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.495052 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.495076 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.495097 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.495109 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.495120 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.495135 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.499791 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.499811 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.499826 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.500159 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.500173 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.500195 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.500213 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.500226 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.500242 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.500369 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.500395 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.500406 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.505448 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.505466 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.505478 1971 log.go:172] 
(0xc000b88460) (3) Data frame sent\nI0501 00:43:47.505821 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.505853 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.505876 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.505910 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.505926 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.505957 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.509749 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.509762 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.509774 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.510163 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.510181 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.510197 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.510218 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.510230 1971 log.go:172] (0xc000b88460) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.510243 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.513581 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.513594 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.513604 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.514003 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.514032 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.514047 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.514061 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.514077 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.514093 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.518184 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.518212 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.518247 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.518447 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.518463 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.518470 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.518480 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.518486 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.518492 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.518498 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.518504 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.518515 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.522486 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.522507 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.522533 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.522925 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.522938 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.522949 1971 
log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.522956 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.522963 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.522988 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\nI0501 00:43:47.523045 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.523063 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.523074 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.526930 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.526943 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.526952 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.527233 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.527254 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.527270 1971 log.go:172] (0xc0009063c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.527300 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.527319 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.527327 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.530991 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.531013 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.531031 1971 log.go:172] (0xc000b88460) (3) Data frame sent\nI0501 00:43:47.531730 1971 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0501 00:43:47.531772 1971 log.go:172] (0xc0009063c0) (5) Data frame handling\nI0501 00:43:47.531797 1971 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0501 00:43:47.531823 1971 log.go:172] (0xc000b88460) (3) Data frame handling\nI0501 00:43:47.533620 1971 log.go:172] (0xc000b4f1e0) Data frame received for 1\nI0501 00:43:47.533639 1971 log.go:172] (0xc000bdc640) (1) Data frame handling\nI0501 00:43:47.533654 1971 log.go:172] (0xc000bdc640) (1) Data frame sent\nI0501 00:43:47.533665 1971 log.go:172] (0xc000b4f1e0) (0xc000bdc640) Stream removed, broadcasting: 1\nI0501 00:43:47.533676 1971 log.go:172] (0xc000b4f1e0) Go away received\nI0501 00:43:47.534225 1971 log.go:172] (0xc000b4f1e0) (0xc000bdc640) Stream removed, broadcasting: 1\nI0501 00:43:47.534265 1971 log.go:172] (0xc000b4f1e0) (0xc000b88460) Stream removed, broadcasting: 3\nI0501 00:43:47.534292 1971 log.go:172] (0xc000b4f1e0) (0xc0009063c0) Stream removed, broadcasting: 5\n" May 1 00:43:47.540: INFO: stdout: "\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-hgpsr\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-2mzvn\naffinity-clusterip-transition-2mzvn\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-2mzvn\naffinity-clusterip-transition-2mzvn\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-2mzvn\naffinity-clusterip-transition-5l6rb" May 1 00:43:47.540: INFO: Received response from host: May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-hgpsr May 1 00:43:47.540: INFO: Received response from host: 
affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-2mzvn May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-2mzvn May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-2mzvn May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-2mzvn May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-2mzvn May 1 00:43:47.540: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8355 execpod-affinitydrxkr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.246.152:80/ ; done' May 1 00:43:47.858: INFO: stderr: "I0501 00:43:47.679223 1991 log.go:172] (0xc00010e2c0) (0xc0004d2d20) Create stream\nI0501 00:43:47.679281 1991 log.go:172] (0xc00010e2c0) (0xc0004d2d20) Stream added, broadcasting: 1\nI0501 00:43:47.684181 1991 log.go:172] (0xc00010e2c0) Reply frame received for 1\nI0501 00:43:47.684234 1991 log.go:172] (0xc00010e2c0) (0xc0004c0460) Create stream\nI0501 00:43:47.684248 1991 log.go:172] (0xc00010e2c0) (0xc0004c0460) Stream added, broadcasting: 3\nI0501 00:43:47.685703 1991 log.go:172] (0xc00010e2c0) Reply frame received for 3\nI0501 00:43:47.685755 1991 log.go:172] (0xc00010e2c0) (0xc0005370e0) Create stream\nI0501 00:43:47.685772 1991 log.go:172] (0xc00010e2c0) (0xc0005370e0) Stream added, broadcasting: 5\nI0501 00:43:47.686628 1991 log.go:172] (0xc00010e2c0) Reply frame received for 5\nI0501 00:43:47.749956 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.750025 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.750047 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.750090 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.750117 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.750143 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.755354 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.755378 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.755398 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.755906 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.755946 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.755966 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.755989 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.756055 1991 log.go:172] 
(0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.756088 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.761723 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.761742 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.761755 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.762293 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.762323 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.762337 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.762356 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.762382 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.762399 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.765740 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.765781 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.765794 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.765810 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.765820 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.765837 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.765938 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.765961 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.765985 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.769652 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.769675 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.769691 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.770049 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.770077 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.770089 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.770113 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.770149 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.770185 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.777838 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.777850 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.777856 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.778516 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.778525 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.778531 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.778749 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.778775 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.778786 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.785301 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.785331 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.785351 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.785906 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.785921 1991 
log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.785932 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.785945 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.785960 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.785968 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.790373 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.790390 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.790418 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.790951 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.790971 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.790984 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.790993 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.791003 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.791026 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.791040 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.791051 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.791080 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.796217 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.796238 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.796249 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.796947 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.796959 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.796975 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.797003 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.797022 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.797038 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.802321 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.802345 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.802362 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.803138 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.803174 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.803197 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.803290 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.803313 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.803332 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.810786 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.810808 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.810825 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.811815 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.811835 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.811853 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.811860 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q 
-s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.811870 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.811885 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.816481 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.816495 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.816513 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.817050 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.817081 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.817100 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0501 00:43:47.817307 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.817361 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.817388 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.817407 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.817443 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\n http://10.106.246.152:80/\nI0501 00:43:47.817477 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.823172 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.823207 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.823236 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.823458 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.823485 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.823499 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.823511 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.823520 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.823542 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.823554 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.823563 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.823579 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.828255 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.828289 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.828314 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.828692 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.828723 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.828744 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.828763 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.828783 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.828803 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.833953 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.833972 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.833981 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.834432 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.834456 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.834482 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.834499 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.834526 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.834539 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.839303 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.839331 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.839372 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.839901 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.839920 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.246.152:80/\nI0501 00:43:47.839933 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.839951 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.839969 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.839989 1991 log.go:172] (0xc0005370e0) (5) Data frame sent\nI0501 00:43:47.844579 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.844606 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.844624 1991 log.go:172] (0xc0004c0460) (3) Data frame sent\nI0501 00:43:47.845873 1991 log.go:172] (0xc00010e2c0) Data frame received for 3\nI0501 00:43:47.845900 1991 log.go:172] (0xc0004c0460) (3) Data frame handling\nI0501 00:43:47.845968 1991 log.go:172] (0xc00010e2c0) Data frame received for 5\nI0501 00:43:47.845992 1991 log.go:172] (0xc0005370e0) (5) Data frame handling\nI0501 00:43:47.847531 1991 log.go:172] (0xc00010e2c0) Data frame received for 1\nI0501 00:43:47.847619 1991 log.go:172] (0xc0004d2d20) (1) Data frame handling\nI0501 00:43:47.847710 1991 log.go:172] (0xc0004d2d20) (1) Data frame sent\nI0501 00:43:47.847791 1991 log.go:172] (0xc00010e2c0) (0xc0004d2d20) Stream removed, broadcasting: 1\nI0501 00:43:47.847836 1991 log.go:172] (0xc00010e2c0) Go away received\nI0501 00:43:47.848383 1991 log.go:172] (0xc00010e2c0) (0xc0004d2d20) Stream removed, broadcasting: 1\nI0501 00:43:47.848404 1991 log.go:172] (0xc00010e2c0) (0xc0004c0460) Stream removed, broadcasting: 3\nI0501 00:43:47.848414 1991 log.go:172] (0xc00010e2c0) (0xc0005370e0) Stream removed, broadcasting: 5\n" May 1 00:43:47.858: INFO: stdout: "\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb\naffinity-clusterip-transition-5l6rb" May 1 00:43:47.858: INFO: Received response from host: May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: 
affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Received response from host: affinity-clusterip-transition-5l6rb May 1 00:43:47.858: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8355, will wait for the garbage collector to delete the pods May 1 00:43:47.957: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.410047ms May 1 00:43:48.458: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.271383ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:43:55.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8355" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:19.560 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":290,"completed":168,"skipped":2818,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:43:55.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-074a9c17-0c8f-43e3-ae42-fde2d5b14966 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:44:01.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4223" for this suite. 
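For reference, the behavior this ConfigMap spec exercises, projecting a ConfigMap's binaryData keys into a volume alongside plain data keys, can be reproduced with a manifest along these lines (a minimal sketch; every name below is illustrative rather than a generated name from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-binary-cm
data:
  text-key: "hello"        # UTF-8 payloads live under .data
binaryData:
  binary-key: AQIDBA==     # arbitrary bytes live under .binaryData, base64-encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: example-binary-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/text-key; wc -c < /etc/cm/binary-key"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: example-binary-cm
EOF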
• [SLOW TEST:6.178 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":169,"skipped":2824,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:44:01.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 1 00:44:09.354: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:09.367: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:11.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:11.371: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:13.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:13.371: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:15.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:15.370: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:17.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:17.371: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:19.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:19.371: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:21.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:21.371: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:23.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:23.372: INFO: Pod pod-with-poststart-exec-hook still exists May 1 00:44:25.367: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 00:44:25.370: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:44:25.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9284" for this suite. 
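The postStart semantics checked above reduce to a lifecycle stanza on the container; a minimal sketch of the shape follows (illustrative names; the real spec also runs a separate pod that records the hook firing, which is omitted here):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-poststart-pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 60"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container immediately after it is created; the
          # container is not considered started until the hook returns, and a
          # failing hook causes the container to be killed
          command: ["sh", "-c", "echo poststart > /tmp/hook"]
EOF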
• [SLOW TEST:24.167 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":290,"completed":170,"skipped":2826,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:44:25.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-42a285cc-a3b1-4a33-8e84-a490a38d5c6b STEP: Creating configMap with name cm-test-opt-upd-fc3fa2da-a625-40ec-a1a4-c59202f3625a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-42a285cc-a3b1-4a33-8e84-a490a38d5c6b STEP: Updating configmap cm-test-opt-upd-fc3fa2da-a625-40ec-a1a4-c59202f3625a STEP: Creating configMap with name cm-test-opt-create-9a04697b-6dd3-4bb5-8a17-dc4089197af9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:44:33.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1566" for this suite. 
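The optional-ConfigMap behavior above hinges on the optional flag of the configMap volume source: the pod starts even when the referenced ConfigMap is absent, and the kubelet syncs the volume contents once the ConfigMap is created or updated. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-optional-cm-pod
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt
      mountPath: /etc/opt-cm
  volumes:
  - name: opt
    configMap:
      name: may-not-exist-yet
      optional: true    # missing ConfigMap is tolerated; contents appear once it exists
EOF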
• [SLOW TEST:8.228 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":171,"skipped":2832,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:44:33.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 1 00:46:34.244: INFO: Successfully updated pod "var-expansion-92c9382f-7436-4dcc-b493-968f224dffa0" STEP: waiting for pod running STEP: deleting the pod gracefully May 1 00:46:36.320: INFO: Deleting pod "var-expansion-92c9382f-7436-4dcc-b493-968f224dffa0" in namespace "var-expansion-6816" May 1 00:46:36.325: INFO: Wait up to 5m0s for pod "var-expansion-92c9382f-7436-4dcc-b493-968f224dffa0" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:10.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6816" for this suite. 
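The mechanism behind this variable-expansion spec is subPathExpr, which expands container environment variables into a volume mount's subpath; when the variable is fed from a pod annotation through the downward API, the expansion succeeds or fails depending on the annotation's current value, which is why updating the pod can unblock a previously failing container. A loose sketch of that wiring, not the exact manifest the test builds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-subpathexpr-pod
  annotations:
    mysubpath: dir-a              # mutable while the pod object exists
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /vol; sleep 30"]
    env:
    - name: SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /vol
      subPathExpr: $(SUBPATH)     # expanded from the container env when the mount is prepared
  volumes:
  - name: workdir
    emptyDir: {}
EOF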
• [SLOW TEST:156.761 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":290,"completed":172,"skipped":2845,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:10.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0501 00:47:11.495354 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 00:47:11.495: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:11.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9298" for this suite. 
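What the garbage-collector spec above verifies is a delete issued with propagationPolicy: Orphan, which removes the Deployment while leaving its ReplicaSet behind. Roughly, from the command line (resource name illustrative; note the kubectl flag spelling changed across releases, with v1.18-era clients using --cascade=false where newer ones use --cascade=orphan):

# orphan dependents instead of cascading the delete
kubectl delete deployment example-deploy --cascade=orphan

# the same policy stated explicitly against the REST API
kubectl proxy &
curl -X DELETE 'http://localhost:8001/apis/apps/v1/namespaces/default/deployments/example-deploy' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'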
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":290,"completed":173,"skipped":2854,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:11.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-387 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-387 STEP: Deleting pre-stop pod May 1 00:47:26.648: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:26.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-387" for this suite. • [SLOW TEST:15.197 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":290,"completed":174,"skipped":2858,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:26.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:39.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8850" for this suite. • [SLOW TEST:13.228 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":290,"completed":175,"skipped":2879,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:39.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 1 00:47:40.029: INFO: Waiting up to 5m0s for pod "var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb" in namespace "var-expansion-243" to be "Succeeded or Failed" May 1 00:47:40.045: INFO: Pod "var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.438156ms May 1 00:47:42.231: INFO: Pod "var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20169736s May 1 00:47:44.236: INFO: Pod "var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.206026906s STEP: Saw pod success May 1 00:47:44.236: INFO: Pod "var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb" satisfied condition "Succeeded or Failed" May 1 00:47:44.239: INFO: Trying to get logs from node latest-worker2 pod var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb container dapi-container: STEP: delete the pod May 1 00:47:44.312: INFO: Waiting for pod var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb to disappear May 1 00:47:44.320: INFO: Pod var-expansion-9e551236-f783-4c82-935e-6ff8e21cafbb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:44.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-243" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":290,"completed":176,"skipped":2886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:44.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 1 00:47:44.406: INFO: Waiting up to 5m0s for pod "client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac" in namespace "containers-280" to be "Succeeded or Failed" May 1 00:47:44.442: INFO: Pod "client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac": Phase="Pending", Reason="", readiness=false. Elapsed: 35.742001ms May 1 00:47:46.447: INFO: Pod "client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040280064s May 1 00:47:48.450: INFO: Pod "client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043532871s STEP: Saw pod success May 1 00:47:48.450: INFO: Pod "client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac" satisfied condition "Succeeded or Failed" May 1 00:47:48.453: INFO: Trying to get logs from node latest-worker2 pod client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac container test-container: STEP: delete the pod May 1 00:47:48.509: INFO: Waiting for pod client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac to disappear May 1 00:47:48.532: INFO: Pod client-containers-c9d8f365-1e7b-4154-94ce-79de6dd20fac no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:48.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-280" for this suite. 
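The container override rule exercised above is that command replaces the image's ENTRYPOINT and args replaces its CMD; supplying both, as this spec does, overrides everything the image declares. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-override-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["echo"]              # replaces the image ENTRYPOINT
    args: ["hello", "override"]    # replaces the image CMD
EOF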
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":290,"completed":177,"skipped":2923,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:48.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:52.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4172" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":178,"skipped":2937,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:52.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 1 00:47:52.792: INFO: Waiting up to 5m0s for pod "pod-a83fd85b-4f78-42fd-9610-b433a887778b" in namespace "emptydir-5086" to be "Succeeded or Failed" May 1 00:47:52.819: INFO: Pod "pod-a83fd85b-4f78-42fd-9610-b433a887778b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.256982ms May 1 00:47:54.987: INFO: Pod "pod-a83fd85b-4f78-42fd-9610-b433a887778b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194593945s May 1 00:47:56.991: INFO: Pod "pod-a83fd85b-4f78-42fd-9610-b433a887778b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199092234s May 1 00:47:58.996: INFO: Pod "pod-a83fd85b-4f78-42fd-9610-b433a887778b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.203908112s STEP: Saw pod success May 1 00:47:58.996: INFO: Pod "pod-a83fd85b-4f78-42fd-9610-b433a887778b" satisfied condition "Succeeded or Failed" May 1 00:47:59.000: INFO: Trying to get logs from node latest-worker pod pod-a83fd85b-4f78-42fd-9610-b433a887778b container test-container: STEP: delete the pod May 1 00:47:59.035: INFO: Waiting for pod pod-a83fd85b-4f78-42fd-9610-b433a887778b to disappear May 1 00:47:59.051: INFO: Pod pod-a83fd85b-4f78-42fd-9610-b433a887778b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:47:59.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5086" for this suite. • [SLOW TEST:6.355 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":179,"skipped":2939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:47:59.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1575.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1575.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1575.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:48:07.179: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.183: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.186: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.215: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.225: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.228: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.231: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod 
dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.234: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:07.239: INFO: Lookups using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local] May 1 00:48:12.245: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.248: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.252: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.255: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.264: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.267: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.270: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.273: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:12.279: INFO: Lookups using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local] May 1 00:48:17.250: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.254: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.256: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.259: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.268: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.271: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.274: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.276: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:17.308: INFO: Lookups using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local] May 1 00:48:22.244: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.249: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.252: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.256: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.266: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.269: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.272: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.275: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:22.282: INFO: Lookups using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local] May 1 00:48:27.244: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.248: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.251: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.253: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource 
(get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.261: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.264: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.267: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.270: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:27.276: INFO: Lookups using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local] May 1 00:48:32.245: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.250: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.253: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.256: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.267: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.270: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.273: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local from 
pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.276: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local from pod dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878: the server could not find the requested resource (get pods dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878) May 1 00:48:32.283: INFO: Lookups using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1575.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1575.svc.cluster.local jessie_udp@dns-test-service-2.dns-1575.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1575.svc.cluster.local] May 1 00:48:37.276: INFO: DNS probes using dns-1575/dns-test-b00eb31e-2ebe-4021-9b5f-0220d53f0878 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:48:37.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1575" for this suite. • [SLOW TEST:38.732 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":290,"completed":180,"skipped":2998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:48:37.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-cb44bf2c-e10e-45e4-a62c-c635c3fec379 STEP: Creating a pod to test consume secrets May 1 00:48:37.996: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53" in namespace "projected-2557" to be "Succeeded or Failed" May 1 00:48:38.000: INFO: Pod "pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292221ms May 1 00:48:40.005: INFO: Pod "pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009301518s May 1 00:48:42.010: INFO: Pod "pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53": Phase="Running", Reason="", readiness=true. Elapsed: 4.013904943s May 1 00:48:44.014: INFO: Pod "pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018290171s STEP: Saw pod success May 1 00:48:44.014: INFO: Pod "pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53" satisfied condition "Succeeded or Failed" May 1 00:48:44.017: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53 container projected-secret-volume-test: STEP: delete the pod May 1 00:48:44.059: INFO: Waiting for pod pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53 to disappear May 1 00:48:44.070: INFO: Pod pod-projected-secrets-39cb3616-0b46-474a-92fc-c0522a690c53 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:48:44.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2557" for this suite. • [SLOW TEST:6.308 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":181,"skipped":3046,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:48:44.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 00:48:44.174: INFO: Waiting up to 5m0s for pod "pod-8dcf7673-17b9-4d21-a54e-a9271166dc47" in namespace "emptydir-8503" to be "Succeeded or Failed" May 1 00:48:44.178: INFO: Pod "pod-8dcf7673-17b9-4d21-a54e-a9271166dc47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657951ms May 1 00:48:46.293: INFO: Pod "pod-8dcf7673-17b9-4d21-a54e-a9271166dc47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118309202s May 1 00:48:48.298: INFO: Pod "pod-8dcf7673-17b9-4d21-a54e-a9271166dc47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.123207244s STEP: Saw pod success May 1 00:48:48.298: INFO: Pod "pod-8dcf7673-17b9-4d21-a54e-a9271166dc47" satisfied condition "Succeeded or Failed" May 1 00:48:48.301: INFO: Trying to get logs from node latest-worker2 pod pod-8dcf7673-17b9-4d21-a54e-a9271166dc47 container test-container: STEP: delete the pod May 1 00:48:48.362: INFO: Waiting for pod pod-8dcf7673-17b9-4d21-a54e-a9271166dc47 to disappear May 1 00:48:48.406: INFO: Pod pod-8dcf7673-17b9-4d21-a54e-a9271166dc47 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:48:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8503" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":182,"skipped":3051,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:48:48.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-242.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-242.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-242.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-242.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:48:54.644: INFO: DNS probes using dns-242/dns-test-c5b1056d-b7d8-4ce9-a2ba-76fd7d6254ca succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:48:54.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-242" for this suite. • [SLOW TEST:6.462 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":290,"completed":183,"skipped":3054,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:48:54.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-12c9a154-cc67-43bf-9323-762ccefbf18c STEP: Creating a pod to test consume configMaps May 1 00:48:55.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b" in namespace "configmap-3024" to be "Succeeded or Failed" May 1 00:48:55.306: INFO: Pod "pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.742122ms May 1 00:48:57.310: INFO: Pod "pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007201826s May 1 00:48:59.314: INFO: Pod "pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011181522s STEP: Saw pod success May 1 00:48:59.314: INFO: Pod "pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b" satisfied condition "Succeeded or Failed" May 1 00:48:59.318: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b container configmap-volume-test: STEP: delete the pod May 1 00:48:59.492: INFO: Waiting for pod pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b to disappear May 1 00:48:59.526: INFO: Pod pod-configmaps-480f26fe-7ef1-4e8b-969b-4ef8aa40a43b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:48:59.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3024" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":184,"skipped":3054,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:48:59.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:49:06.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8310" for this suite. • [SLOW TEST:7.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":290,"completed":185,"skipped":3086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:49:06.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 1 00:49:06.758: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5633 /api/v1/namespaces/watch-5633/configmaps/e2e-watch-test-watch-closed ac3ca3ce-f7f3-4904-9552-624af79a0b10 461818 0 2020-05-01 00:49:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-01 00:49:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:49:06.758: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5633 /api/v1/namespaces/watch-5633/configmaps/e2e-watch-test-watch-closed ac3ca3ce-f7f3-4904-9552-624af79a0b10 461819 0 2020-05-01 00:49:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-01 00:49:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 1 00:49:06.768: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5633 /api/v1/namespaces/watch-5633/configmaps/e2e-watch-test-watch-closed ac3ca3ce-f7f3-4904-9552-624af79a0b10 461820 0 2020-05-01 00:49:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-01 00:49:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:49:06.768: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5633 /api/v1/namespaces/watch-5633/configmaps/e2e-watch-test-watch-closed ac3ca3ce-f7f3-4904-9552-624af79a0b10 461821 0 2020-05-01 00:49:06 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-01 00:49:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:49:06.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5633" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":290,"completed":186,"skipped":3114,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:49:06.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 1 00:49:12.932: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7520 PodName:pod-sharedvolume-e4270b26-bced-48be-8bfc-1114f50f6d28 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 00:49:12.932: INFO: >>> kubeConfig: /root/.kube/config I0501 00:49:12.967862 7 log.go:172] (0xc00259b130) (0xc0013e2500) Create stream I0501 00:49:12.967893 7 log.go:172] (0xc00259b130) (0xc0013e2500) Stream added, broadcasting: 1 I0501 00:49:12.971802 7 log.go:172] (0xc00259b130) Reply frame received for 1 I0501 00:49:12.971836 7 log.go:172] (0xc00259b130) (0xc001449f40) Create stream I0501 00:49:12.971846 7 log.go:172] (0xc00259b130) (0xc001449f40) Stream added, broadcasting: 3 I0501 00:49:12.973970 7 log.go:172] (0xc00259b130) Reply frame received for 3 I0501 00:49:12.973998 7 log.go:172] (0xc00259b130) (0xc0013e25a0) Create stream I0501 00:49:12.974006 7 log.go:172] (0xc00259b130) (0xc0013e25a0) Stream added, broadcasting: 5 I0501 00:49:12.974815 7 log.go:172] (0xc00259b130) Reply frame received for 5 I0501 00:49:13.044123 7 log.go:172] (0xc00259b130) Data frame received for 3 I0501 00:49:13.044169 7 log.go:172] (0xc001449f40) (3) Data frame handling I0501 00:49:13.044209 7 log.go:172] (0xc00259b130) Data frame received for 5 I0501 00:49:13.044274 7 log.go:172] (0xc0013e25a0) (5) Data frame handling I0501 00:49:13.044310 7 log.go:172] (0xc001449f40) (3) Data frame sent I0501 00:49:13.044327 7 log.go:172] (0xc00259b130) Data frame received for 3 I0501 00:49:13.044339 7 log.go:172] (0xc001449f40) (3) Data frame handling I0501 00:49:13.046064 7 log.go:172] (0xc00259b130) Data frame received for 1 I0501 00:49:13.046092 7 log.go:172] (0xc0013e2500) (1) Data frame handling I0501 00:49:13.046111 7 log.go:172] (0xc0013e2500) (1) Data frame sent I0501 00:49:13.046142 7 log.go:172] (0xc00259b130) (0xc0013e2500) Stream removed, broadcasting: 1 I0501 00:49:13.046221 7 log.go:172] (0xc00259b130) Go away received I0501 00:49:13.046343 7 log.go:172] (0xc00259b130) (0xc0013e2500) Stream removed,
broadcasting: 1 I0501 00:49:13.046420 7 log.go:172] (0xc00259b130) (0xc001449f40) Stream removed, broadcasting: 3 I0501 00:49:13.046490 7 log.go:172] (0xc00259b130) (0xc0013e25a0) Stream removed, broadcasting: 5 May 1 00:49:13.046: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:49:13.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7520" for this suite. • [SLOW TEST:6.280 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":290,"completed":187,"skipped":3114,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:49:13.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 1 00:49:17.139: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:49:17.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3517" for this suite. 
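------------------------------
The termination-message case above hinges on three container fields working together: a non-root securityContext, a terminationMessagePath pointing somewhere other than the default /dev/termination-log, and the File policy that makes the kubelet read that file back once the container exits (the "DONE" the test matched against). The following client-go sketch builds an equivalent pod; it is not the e2e framework's own fixture, and the pod name, namespace, UID of 1000, and image tag are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this run; adjust for other clusters.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nonRoot := int64(1000) // assumed non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path: the kubelet reads the termination message
				// from here instead of the default /dev/termination-log.
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
				SecurityContext:          &corev1.SecurityContext{RunAsUser: &nonRoot},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod", pod.Name)
}
------------------------------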
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":290,"completed":188,"skipped":3115,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:49:17.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-de218a60-fdfa-4d37-8df2-1236fedf1265 STEP: Creating a pod to test consume configMaps May 1 00:49:17.260: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed" in namespace "projected-9869" to be "Succeeded or Failed" May 1 00:49:17.276: INFO: Pod "pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed": Phase="Pending", Reason="", readiness=false. Elapsed: 16.402015ms May 1 00:49:19.281: INFO: Pod "pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021138571s May 1 00:49:21.285: INFO: Pod "pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02560678s STEP: Saw pod success May 1 00:49:21.285: INFO: Pod "pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed" satisfied condition "Succeeded or Failed" May 1 00:49:21.288: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed container projected-configmap-volume-test: STEP: delete the pod May 1 00:49:21.327: INFO: Waiting for pod pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed to disappear May 1 00:49:21.342: INFO: Pod pod-projected-configmaps-ef56fd56-85e0-4eb8-b02d-4164059e61ed no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:49:21.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9869" for this suite. 
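------------------------------
The defaultMode exercised above is the volume-level permission mask of a projected volume: every file projected from the ConfigMap gets those mode bits unless a per-item mode overrides them. Below is a sketch of a pod object in that shape; the ConfigMap name, the 0400 mode, and the mount path are assumptions for illustration, and the program only prints the manifest rather than creating it.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed restrictive defaultMode for every projected file
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// DefaultMode governs the permissions of all projected
						// files that do not set their own per-item mode.
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"}, // hypothetical ConfigMap
							},
						}},
					},
				},
			}},
		},
	}
	// Print the object as JSON; "kubectl apply -f -" would accept this output.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------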
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":189,"skipped":3115,"failed":0} S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:49:21.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:49:21.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-770" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":290,"completed":190,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:49:21.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 1 00:49:21.530: INFO: PodSpec: initContainers in spec.initContainers May 1 00:50:14.680: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0d8139c4-6d65-456a-bbaf-128425d1c767", GenerateName:"", Namespace:"init-container-8811", SelfLink:"/api/v1/namespaces/init-container-8811/pods/pod-init-0d8139c4-6d65-456a-bbaf-128425d1c767", UID:"f6d5e357-926f-4663-b183-231141ea84e4", ResourceVersion:"462148", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723890961, loc:(*time.Location)(0x7c48300)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"530569828"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001e35f40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e35f60)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001e35f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e35fe0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vlzz7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005401300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vlzz7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vlzz7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vlzz7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004a611c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024abea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a612d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a612f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004a612f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004a612fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890961, loc:(*time.Location)(0x7c48300)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890961, loc:(*time.Location)(0x7c48300)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890961, loc:(*time.Location)(0x7c48300)}}, Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723890961, loc:(*time.Location)(0x7c48300)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.158", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.158"}}, StartTime:(*v1.Time)(0xc002526020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0025260e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024abf80)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0800c5709f4cb63bbf96d54249f048ea662dc3e4be26f18455243d0ef72880b5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002526140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002526080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004a6137f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:50:14.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8811" for this suite. 
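------------------------------
The fifty-plus seconds this case takes come from kubelet restart backoff: with restartPolicy Always a failed init container is retried (the pod dump above shows init1 at RestartCount:3), init2 stays Waiting behind it, and the app container run1 must never start. A stripped-down sketch of a pod with that shape follows; it mirrors the dumped spec but is not the test's own fixture, and the pod name is hypothetical.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			// With Always, a failed init container is retried with backoff
			// instead of failing the pod outright.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 exits non-zero every time, so init2 never runs.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				// run1 must stay Waiting for as long as init1 keeps failing.
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------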
• [SLOW TEST:53.263 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":290,"completed":191,"skipped":3135,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:50:14.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-865e79d1-50df-4288-b5ea-2c7bb54cd0bd STEP: Creating a pod to test consume configMaps May 1 00:50:14.876: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92" in namespace "projected-8591" to be "Succeeded or Failed" May 1 00:50:14.940: INFO: Pod "pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92": Phase="Pending", Reason="", readiness=false. Elapsed: 63.432795ms May 1 00:50:16.994: INFO: Pod "pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117826274s May 1 00:50:18.999: INFO: Pod "pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122761038s STEP: Saw pod success May 1 00:50:18.999: INFO: Pod "pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92" satisfied condition "Succeeded or Failed" May 1 00:50:19.003: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92 container projected-configmap-volume-test: STEP: delete the pod May 1 00:50:19.062: INFO: Waiting for pod pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92 to disappear May 1 00:50:19.073: INFO: Pod pod-projected-configmaps-7e14770a-fae8-4e03-92e5-d7dc79a20b92 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:50:19.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8591" for this suite. 
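What the projected-ConfigMap test above creates, reduced to a client-go sketch: a ConfigMap exposed through a "projected" volume and read once by a short-lived test container. The ConfigMap name, mount path, and command below are illustrative, not the generated names from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedConfigMapPod mounts ConfigMap "my-config" via a projected
// volume; the pod is expected to reach Succeeded after printing the file,
// mirroring the "Succeeded or Failed" wait in the log.
func createProjectedConfigMapPod(cs *kubernetes.Clientset, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-cm"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"cat", "/etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}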
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":192,"skipped":3143,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:50:19.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 1 00:50:19.267: INFO: >>> kubeConfig: /root/.kube/config May 1 00:50:22.199: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:50:32.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9645" for this suite. • [SLOW TEST:13.717 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":290,"completed":193,"skipped":3144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:50:32.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 00:50:33.380: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 00:50:35.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 00:50:37.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891033, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 00:50:40.854: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:50:40.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8691" for this suite. STEP: Destroying namespace "webhook-8691-markers" for this suite. 
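The discovery checks logged above can be reproduced with client-go's discovery client instead of raw GETs against /apis. A hedged sketch (function name illustrative):

package main

import "k8s.io/client-go/kubernetes"

// webhookResourcesInDiscovery asks the API server which resources
// admissionregistration.k8s.io/v1 serves and reports whether the two
// webhook-configuration kinds are present, as the test verifies.
func webhookResourcesInDiscovery(cs *kubernetes.Clientset) (mutating, validating bool, err error) {
	list, err := cs.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		return false, false, err
	}
	for _, r := range list.APIResources {
		switch r.Name {
		case "mutatingwebhookconfigurations":
			mutating = true
		case "validatingwebhookconfigurations":
			validating = true
		}
	}
	return mutating, validating, nil
}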
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.176 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":290,"completed":194,"skipped":3201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:50:41.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:50:41.094: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 1 00:50:41.131: INFO: Number of nodes with available pods: 0 May 1 00:50:41.131: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 1 00:50:41.202: INFO: Number of nodes with available pods: 0 May 1 00:50:41.202: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:42.336: INFO: Number of nodes with available pods: 0 May 1 00:50:42.336: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:43.207: INFO: Number of nodes with available pods: 0 May 1 00:50:43.207: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:44.222: INFO: Number of nodes with available pods: 1 May 1 00:50:44.222: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 1 00:50:44.256: INFO: Number of nodes with available pods: 1 May 1 00:50:44.256: INFO: Number of running nodes: 0, number of available pods: 1 May 1 00:50:45.262: INFO: Number of nodes with available pods: 0 May 1 00:50:45.262: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 1 00:50:45.287: INFO: Number of nodes with available pods: 0 May 1 00:50:45.287: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:46.290: INFO: Number of nodes with available pods: 0 May 1 00:50:46.290: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:47.291: INFO: Number of nodes with available pods: 0 May 1 00:50:47.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:48.291: INFO: Number of nodes with available pods: 0 May 1 00:50:48.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:49.291: INFO: Number of nodes with available pods: 0 May 1 00:50:49.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:50.297: INFO: Number of nodes with available pods: 0 May 1 00:50:50.297: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:51.290: INFO: Number of nodes with available pods: 0 May 1 00:50:51.290: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:52.291: INFO: Number of nodes with available pods: 0 May 1 00:50:52.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:53.291: INFO: Number of nodes with available pods: 0 May 1 00:50:53.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:54.291: INFO: Number of nodes with available pods: 0 May 1 00:50:54.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:55.304: INFO: Number of nodes with available pods: 0 May 1 00:50:55.304: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:56.290: INFO: Number of nodes with available pods: 0 May 1 00:50:56.290: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:57.291: INFO: Number of nodes with available pods: 0 May 1 00:50:57.291: INFO: Node latest-worker2 is running more than one daemon pod May 1 00:50:58.293: INFO: Number of nodes with available pods: 1 May 1 00:50:58.293: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2497, will wait for the garbage collector to delete the pods May 1 00:50:58.358: INFO: Deleting DaemonSet.extensions daemon-set took: 7.279579ms May 1 00:50:58.458: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 100.309119ms May 1 00:51:05.261: INFO: Number of nodes with available pods: 0 May 1 00:51:05.261: INFO: Number of running nodes: 0, number of available pods: 0 May 1 00:51:05.287: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2497/daemonsets","resourceVersion":"462472"},"items":null} May 1 00:51:05.289: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2497/pods","resourceVersion":"462472"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:51:05.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2497" for this suite. • [SLOW TEST:24.339 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":290,"completed":195,"skipped":3232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:51:05.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:51:05.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6897" for this suite. STEP: Destroying namespace "nspatchtest-14739933-ea59-4195-9e63-1fb04dd6b548-5983" for this suite. 
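The Namespace patch exercised above boils down to a single strategic-merge-patch call. A sketch, with an illustrative label key and value:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel adds one label to an existing namespace; the test
// then re-reads the namespace and asserts the label is present.
func patchNamespaceLabel(cs *kubernetes.Clientset, name string) error {
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	_, err := cs.CoreV1().Namespaces().Patch(
		context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}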
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":290,"completed":196,"skipped":3255,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:51:05.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-04c4be4e-c457-47a4-bb35-d976c2edc205 STEP: Creating a pod to test consume secrets May 1 00:51:05.598: INFO: Waiting up to 5m0s for pod "pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf" in namespace "secrets-2780" to be "Succeeded or Failed" May 1 00:51:05.615: INFO: Pod "pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.65804ms May 1 00:51:07.629: INFO: Pod "pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030824379s May 1 00:51:09.633: INFO: Pod "pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034913373s STEP: Saw pod success May 1 00:51:09.633: INFO: Pod "pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf" satisfied condition "Succeeded or Failed" May 1 00:51:09.636: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf container secret-volume-test: STEP: delete the pod May 1 00:51:09.692: INFO: Waiting for pod pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf to disappear May 1 00:51:09.698: INFO: Pod pod-secrets-ef78dbb2-b109-4f4b-afc1-4a1db4544aaf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:51:09.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2780" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":197,"skipped":3334,"failed":0} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:51:09.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:51:09.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8082" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":290,"completed":198,"skipped":3336,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:51:09.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 1 00:51:09.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed" in namespace "projected-9154" to be "Succeeded or Failed" May 1 00:51:09.988: INFO: Pod "downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 29.908611ms May 1 00:51:11.992: INFO: Pod "downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034064516s May 1 00:51:13.998: INFO: Pod "downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039348151s STEP: Saw pod success May 1 00:51:13.998: INFO: Pod "downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed" satisfied condition "Succeeded or Failed" May 1 00:51:14.001: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed container client-container: STEP: delete the pod May 1 00:51:14.056: INFO: Waiting for pod downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed to disappear May 1 00:51:14.064: INFO: Pod downwardapi-volume-1e5ea760-64f1-400e-8b76-b5a81ffed8ed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:51:14.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9154" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":290,"completed":199,"skipped":3340,"failed":0} S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:51:14.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 1 00:51:14.121: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:51:18.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4146" for this suite. 
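The websocket test above hits the same pod-log endpoint that client-go exposes as an ordinary log stream; the sketch below shows the stream API, not the websocket transport itself. Pod and namespace parameters are illustrative:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows a container's log output and copies it to stdout
// until the stream closes.
func streamPodLogs(cs *kubernetes.Clientset, ns, pod string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc) // print log lines as they arrive
	return err
}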
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":290,"completed":200,"skipped":3341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:51:18.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9265 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 1 00:51:18.293: INFO: Found 0 stateful pods, waiting for 3 May 1 00:51:28.312: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 00:51:28.312: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 00:51:28.312: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 1 00:51:38.298: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 00:51:38.298: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 00:51:38.298: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 1 00:51:38.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9265 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 00:51:41.439: INFO: stderr: "I0501 00:51:41.317320 2012 log.go:172] (0xc0008d8580) (0xc00053a5a0) Create stream\nI0501 00:51:41.317373 2012 log.go:172] (0xc0008d8580) (0xc00053a5a0) Stream added, broadcasting: 1\nI0501 00:51:41.320247 2012 log.go:172] (0xc0008d8580) Reply frame received for 1\nI0501 00:51:41.320276 2012 log.go:172] (0xc0008d8580) (0xc0004e4280) Create stream\nI0501 00:51:41.320283 2012 log.go:172] (0xc0008d8580) (0xc0004e4280) Stream added, broadcasting: 3\nI0501 00:51:41.321666 2012 log.go:172] (0xc0008d8580) Reply frame received for 3\nI0501 00:51:41.321708 2012 log.go:172] (0xc0008d8580) (0xc0004565a0) Create stream\nI0501 00:51:41.321726 2012 log.go:172] (0xc0008d8580) (0xc0004565a0) Stream added, broadcasting: 5\nI0501 00:51:41.322595 2012 log.go:172] (0xc0008d8580) Reply frame received for 5\nI0501 00:51:41.401719 2012 log.go:172] (0xc0008d8580) Data frame received for 5\nI0501 00:51:41.401753 2012 log.go:172] (0xc0004565a0) (5) Data frame handling\nI0501 00:51:41.401773 2012 log.go:172] (0xc0004565a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 
00:51:41.432650 2012 log.go:172] (0xc0008d8580) Data frame received for 3\nI0501 00:51:41.432718 2012 log.go:172] (0xc0004e4280) (3) Data frame handling\nI0501 00:51:41.432731 2012 log.go:172] (0xc0004e4280) (3) Data frame sent\nI0501 00:51:41.432746 2012 log.go:172] (0xc0008d8580) Data frame received for 3\nI0501 00:51:41.432765 2012 log.go:172] (0xc0004e4280) (3) Data frame handling\nI0501 00:51:41.432950 2012 log.go:172] (0xc0008d8580) Data frame received for 5\nI0501 00:51:41.432997 2012 log.go:172] (0xc0004565a0) (5) Data frame handling\nI0501 00:51:41.434804 2012 log.go:172] (0xc0008d8580) Data frame received for 1\nI0501 00:51:41.434827 2012 log.go:172] (0xc00053a5a0) (1) Data frame handling\nI0501 00:51:41.434856 2012 log.go:172] (0xc00053a5a0) (1) Data frame sent\nI0501 00:51:41.434869 2012 log.go:172] (0xc0008d8580) (0xc00053a5a0) Stream removed, broadcasting: 1\nI0501 00:51:41.435092 2012 log.go:172] (0xc0008d8580) Go away received\nI0501 00:51:41.435213 2012 log.go:172] (0xc0008d8580) (0xc00053a5a0) Stream removed, broadcasting: 1\nI0501 00:51:41.435231 2012 log.go:172] (0xc0008d8580) (0xc0004e4280) Stream removed, broadcasting: 3\nI0501 00:51:41.435240 2012 log.go:172] (0xc0008d8580) (0xc0004565a0) Stream removed, broadcasting: 5\n" May 1 00:51:41.439: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 00:51:41.439: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 1 00:51:51.473: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 1 00:52:01.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9265 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 00:52:01.734: INFO: stderr: "I0501 00:52:01.642540 2042 log.go:172] (0xc0006e6bb0) (0xc0006e2dc0) Create stream\nI0501 00:52:01.642609 2042 log.go:172] (0xc0006e6bb0) (0xc0006e2dc0) Stream added, broadcasting: 1\nI0501 00:52:01.645036 2042 log.go:172] (0xc0006e6bb0) Reply frame received for 1\nI0501 00:52:01.645095 2042 log.go:172] (0xc0006e6bb0) (0xc0006d0b40) Create stream\nI0501 00:52:01.645363 2042 log.go:172] (0xc0006e6bb0) (0xc0006d0b40) Stream added, broadcasting: 3\nI0501 00:52:01.646512 2042 log.go:172] (0xc0006e6bb0) Reply frame received for 3\nI0501 00:52:01.646564 2042 log.go:172] (0xc0006e6bb0) (0xc0006e3360) Create stream\nI0501 00:52:01.646576 2042 log.go:172] (0xc0006e6bb0) (0xc0006e3360) Stream added, broadcasting: 5\nI0501 00:52:01.647642 2042 log.go:172] (0xc0006e6bb0) Reply frame received for 5\nI0501 00:52:01.726276 2042 log.go:172] (0xc0006e6bb0) Data frame received for 3\nI0501 00:52:01.726321 2042 log.go:172] (0xc0006d0b40) (3) Data frame handling\nI0501 00:52:01.726353 2042 log.go:172] (0xc0006d0b40) (3) Data frame sent\nI0501 00:52:01.726370 2042 log.go:172] (0xc0006e6bb0) Data frame received for 3\nI0501 00:52:01.726384 2042 log.go:172] (0xc0006d0b40) (3) Data frame handling\nI0501 00:52:01.726469 2042 log.go:172] (0xc0006e6bb0) Data frame received for 5\nI0501 00:52:01.726499 2042 log.go:172] (0xc0006e3360) (5) Data frame handling\nI0501 00:52:01.726541 2042 log.go:172] (0xc0006e3360) (5) Data frame sent\nI0501 00:52:01.726574 2042 log.go:172] (0xc0006e6bb0) Data 
frame received for 5\nI0501 00:52:01.726586 2042 log.go:172] (0xc0006e3360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:52:01.728276 2042 log.go:172] (0xc0006e6bb0) Data frame received for 1\nI0501 00:52:01.728314 2042 log.go:172] (0xc0006e2dc0) (1) Data frame handling\nI0501 00:52:01.728336 2042 log.go:172] (0xc0006e2dc0) (1) Data frame sent\nI0501 00:52:01.728361 2042 log.go:172] (0xc0006e6bb0) (0xc0006e2dc0) Stream removed, broadcasting: 1\nI0501 00:52:01.728395 2042 log.go:172] (0xc0006e6bb0) Go away received\nI0501 00:52:01.728877 2042 log.go:172] (0xc0006e6bb0) (0xc0006e2dc0) Stream removed, broadcasting: 1\nI0501 00:52:01.728903 2042 log.go:172] (0xc0006e6bb0) (0xc0006d0b40) Stream removed, broadcasting: 3\nI0501 00:52:01.728916 2042 log.go:172] (0xc0006e6bb0) (0xc0006e3360) Stream removed, broadcasting: 5\n" May 1 00:52:01.735: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 00:52:01.735: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 00:52:11.756: INFO: Waiting for StatefulSet statefulset-9265/ss2 to complete update May 1 00:52:11.756: INFO: Waiting for Pod statefulset-9265/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 00:52:11.756: INFO: Waiting for Pod statefulset-9265/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 00:52:11.756: INFO: Waiting for Pod statefulset-9265/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 00:52:21.775: INFO: Waiting for StatefulSet statefulset-9265/ss2 to complete update May 1 00:52:21.775: INFO: Waiting for Pod statefulset-9265/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 00:52:21.775: INFO: Waiting for Pod statefulset-9265/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 00:52:31.764: INFO: Waiting for StatefulSet statefulset-9265/ss2 to complete update May 1 00:52:31.764: INFO: Waiting for Pod statefulset-9265/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 1 00:52:41.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9265 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 00:52:42.058: INFO: stderr: "I0501 00:52:41.908440 2063 log.go:172] (0xc000985a20) (0xc00096e3c0) Create stream\nI0501 00:52:41.908520 2063 log.go:172] (0xc000985a20) (0xc00096e3c0) Stream added, broadcasting: 1\nI0501 00:52:41.912657 2063 log.go:172] (0xc000985a20) Reply frame received for 1\nI0501 00:52:41.912732 2063 log.go:172] (0xc000985a20) (0xc000620aa0) Create stream\nI0501 00:52:41.912781 2063 log.go:172] (0xc000985a20) (0xc000620aa0) Stream added, broadcasting: 3\nI0501 00:52:41.915143 2063 log.go:172] (0xc000985a20) Reply frame received for 3\nI0501 00:52:41.915181 2063 log.go:172] (0xc000985a20) (0xc0006d4f00) Create stream\nI0501 00:52:41.915191 2063 log.go:172] (0xc000985a20) (0xc0006d4f00) Stream added, broadcasting: 5\nI0501 00:52:41.916190 2063 log.go:172] (0xc000985a20) Reply frame received for 5\nI0501 00:52:42.008974 2063 log.go:172] (0xc000985a20) Data frame received for 5\nI0501 00:52:42.009001 2063 log.go:172] (0xc0006d4f00) (5) Data frame handling\nI0501 00:52:42.009021 2063 log.go:172] (0xc0006d4f00) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:52:42.047276 2063 log.go:172] (0xc000985a20) Data frame received for 5\nI0501 00:52:42.047307 2063 log.go:172] (0xc0006d4f00) (5) Data frame handling\nI0501 00:52:42.047352 2063 log.go:172] (0xc000985a20) Data frame received for 3\nI0501 00:52:42.047399 2063 log.go:172] (0xc000620aa0) (3) Data frame handling\nI0501 00:52:42.047424 2063 log.go:172] (0xc000620aa0) (3) Data frame sent\nI0501 00:52:42.047441 2063 log.go:172] (0xc000985a20) Data frame received for 3\nI0501 00:52:42.047457 2063 log.go:172] (0xc000620aa0) (3) Data frame handling\nI0501 00:52:42.054196 2063 log.go:172] (0xc000985a20) Data frame received for 1\nI0501 00:52:42.054221 2063 log.go:172] (0xc00096e3c0) (1) Data frame handling\nI0501 00:52:42.054241 2063 log.go:172] (0xc00096e3c0) (1) Data frame sent\nI0501 00:52:42.054252 2063 log.go:172] (0xc000985a20) (0xc00096e3c0) Stream removed, broadcasting: 1\nI0501 00:52:42.054262 2063 log.go:172] (0xc000985a20) Go away received\nI0501 00:52:42.054587 2063 log.go:172] (0xc000985a20) (0xc00096e3c0) Stream removed, broadcasting: 1\nI0501 00:52:42.054603 2063 log.go:172] (0xc000985a20) (0xc000620aa0) Stream removed, broadcasting: 3\nI0501 00:52:42.054611 2063 log.go:172] (0xc000985a20) (0xc0006d4f00) Stream removed, broadcasting: 5\n" May 1 00:52:42.059: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 00:52:42.059: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 00:52:52.093: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 1 00:53:02.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9265 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 00:53:02.407: INFO: stderr: "I0501 00:53:02.319922 2084 log.go:172] (0xc00003be40) (0xc000375d60) Create stream\nI0501 00:53:02.320000 2084 log.go:172] (0xc00003be40) (0xc000375d60) Stream added, broadcasting: 1\nI0501 00:53:02.323361 2084 log.go:172] (0xc00003be40) Reply frame received for 1\nI0501 00:53:02.323393 2084 log.go:172] (0xc00003be40) (0xc00051ebe0) Create stream\nI0501 00:53:02.323403 2084 log.go:172] (0xc00003be40) (0xc00051ebe0) Stream added, broadcasting: 3\nI0501 00:53:02.324381 2084 log.go:172] (0xc00003be40) Reply frame received for 3\nI0501 00:53:02.324411 2084 log.go:172] (0xc00003be40) (0xc0004fc460) Create stream\nI0501 00:53:02.324422 2084 log.go:172] (0xc00003be40) (0xc0004fc460) Stream added, broadcasting: 5\nI0501 00:53:02.325735 2084 log.go:172] (0xc00003be40) Reply frame received for 5\nI0501 00:53:02.399677 2084 log.go:172] (0xc00003be40) Data frame received for 5\nI0501 00:53:02.399733 2084 log.go:172] (0xc0004fc460) (5) Data frame handling\nI0501 00:53:02.399759 2084 log.go:172] (0xc0004fc460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:53:02.399788 2084 log.go:172] (0xc00003be40) Data frame received for 3\nI0501 00:53:02.399805 2084 log.go:172] (0xc00051ebe0) (3) Data frame handling\nI0501 00:53:02.399837 2084 log.go:172] (0xc00051ebe0) (3) Data frame sent\nI0501 00:53:02.399855 2084 log.go:172] (0xc00003be40) Data frame received for 3\nI0501 00:53:02.399870 2084 log.go:172] (0xc00051ebe0) (3) Data frame handling\nI0501 00:53:02.399886 2084 log.go:172] (0xc00003be40) Data frame received for 5\nI0501 00:53:02.399900 2084 
log.go:172] (0xc0004fc460) (5) Data frame handling\nI0501 00:53:02.401619 2084 log.go:172] (0xc00003be40) Data frame received for 1\nI0501 00:53:02.401648 2084 log.go:172] (0xc000375d60) (1) Data frame handling\nI0501 00:53:02.401669 2084 log.go:172] (0xc000375d60) (1) Data frame sent\nI0501 00:53:02.401704 2084 log.go:172] (0xc00003be40) (0xc000375d60) Stream removed, broadcasting: 1\nI0501 00:53:02.401822 2084 log.go:172] (0xc00003be40) Go away received\nI0501 00:53:02.402107 2084 log.go:172] (0xc00003be40) (0xc000375d60) Stream removed, broadcasting: 1\nI0501 00:53:02.402126 2084 log.go:172] (0xc00003be40) (0xc00051ebe0) Stream removed, broadcasting: 3\nI0501 00:53:02.402136 2084 log.go:172] (0xc00003be40) (0xc0004fc460) Stream removed, broadcasting: 5\n" May 1 00:53:02.407: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 00:53:02.407: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 00:53:12.429: INFO: Waiting for StatefulSet statefulset-9265/ss2 to complete update May 1 00:53:12.429: INFO: Waiting for Pod statefulset-9265/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 1 00:53:12.429: INFO: Waiting for Pod statefulset-9265/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 1 00:53:12.429: INFO: Waiting for Pod statefulset-9265/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 1 00:53:22.438: INFO: Waiting for StatefulSet statefulset-9265/ss2 to complete update May 1 00:53:22.438: INFO: Waiting for Pod statefulset-9265/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 1 00:53:22.438: INFO: Waiting for Pod statefulset-9265/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 1 00:53:32.436: INFO: Waiting for StatefulSet statefulset-9265/ss2 to complete update May 1 00:53:32.436: INFO: Waiting for Pod statefulset-9265/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 1 00:53:42.455: INFO: Deleting all statefulset in ns statefulset-9265 May 1 00:53:42.458: INFO: Scaling statefulset ss2 to 0 May 1 00:54:02.515: INFO: Waiting for statefulset status.replicas updated to 0 May 1 00:54:02.517: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:02.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9265" for this suite. 
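Both the rolling update and the rollback above are template edits: the controller replaces pods in reverse ordinal order until every pod carries the new controller revision, and a rollback simply re-applies the earlier template. A sketch of the image-update step (conflict retry omitted for brevity):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setStatefulSetImage rewrites the first container's image in the pod
// template, which triggers a RollingUpdate to a new revision.
func setStatefulSetImage(cs *kubernetes.Clientset, ns, name, image string) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.Template.Spec.Containers[0].Image = image
	_, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
	return err
}

In the run above this would be called with namespace "statefulset-9265", name "ss2", and image "docker.io/library/httpd:2.4.39-alpine" for the update, then "docker.io/library/httpd:2.4.38-alpine" again for the rollback.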
• [SLOW TEST:164.378 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":290,"completed":201,"skipped":3368,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:54:02.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 1 00:54:02.670: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4287 /api/v1/namespaces/watch-4287/configmaps/e2e-watch-test-resource-version eb2f0a3a-a95f-48c6-a032-2e1517c216d6 463462 0 2020-05-01 00:54:02 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-01 00:54:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 00:54:02.671: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4287 /api/v1/namespaces/watch-4287/configmaps/e2e-watch-test-resource-version eb2f0a3a-a95f-48c6-a032-2e1517c216d6 463463 0 2020-05-01 00:54:02 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-01 00:54:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:02.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4287" for this suite. 
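Starting a watch at an explicit resourceVersion, as the test above does, replays every change recorded after that version — here the second MODIFIED event and the DELETED event. A sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMapsFrom opens a watch on ConfigMaps starting at the given
// resourceVersion and prints each event type until the watch closes.
func watchConfigMapsFrom(cs *kubernetes.Clientset, ns, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: rv, // deliver changes that happened after this version
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // e.g. MODIFIED, then DELETED, as in the log
	}
	return nil
}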
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":290,"completed":202,"skipped":3374,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:54:02.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 1 00:54:02.777: INFO: Waiting up to 5m0s for pod "downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c" in namespace "downward-api-8927" to be "Succeeded or Failed" May 1 00:54:02.823: INFO: Pod "downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.864272ms May 1 00:54:04.827: INFO: Pod "downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050434167s May 1 00:54:06.900: INFO: Pod "downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123461851s STEP: Saw pod success May 1 00:54:06.900: INFO: Pod "downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c" satisfied condition "Succeeded or Failed" May 1 00:54:06.904: INFO: Trying to get logs from node latest-worker2 pod downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c container dapi-container: STEP: delete the pod May 1 00:54:06.957: INFO: Waiting for pod downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c to disappear May 1 00:54:06.967: INFO: Pod downward-api-7c130fc1-bca2-4cc0-84bc-e1ae0f145a7c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:06.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8927" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":290,"completed":203,"skipped":3383,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:54:06.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 00:54:13.204: INFO: DNS probes using dns-3669/dns-test-72745949-1749-4619-a9f8-d48325e0b9f0 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:13.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3669" for this suite. 
• [SLOW TEST:6.358 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":290,"completed":204,"skipped":3397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:54:13.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 1 00:54:13.798: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8597" to be "Succeeded or Failed" May 1 00:54:13.812: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.373348ms May 1 00:54:15.906: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108809188s May 1 00:54:17.911: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113401677s May 1 00:54:19.915: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117853269s STEP: Saw pod success May 1 00:54:19.915: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 1 00:54:19.918: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 1 00:54:20.062: INFO: Waiting for pod pod-host-path-test to disappear May 1 00:54:20.075: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:20.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8597" for this suite. 
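A sketch of a pod like pod-host-path-test above: a hostPath volume mounted into a container that prints the path's mode. Host path, type, and command are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHostPathPod mounts a directory from the node and lists it so the
// test can check the volume's mode from the container logs.
func createHostPathPod(cs *kubernetes.Clientset, ns string) error {
	hpType := corev1.HostPathDirectoryOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path", Type: &hpType},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container-1",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"ls", "-ld", "/test-volume"}, // prints the mode
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}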
• [SLOW TEST:6.748 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":205,"skipped":3451,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:54:20.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 1 00:54:20.176: INFO: Waiting up to 5m0s for pod "pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39" in namespace "emptydir-7571" to be "Succeeded or Failed" May 1 00:54:20.179: INFO: Pod "pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958236ms May 1 00:54:22.239: INFO: Pod "pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062755392s May 1 00:54:24.242: INFO: Pod "pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066651226s STEP: Saw pod success May 1 00:54:24.243: INFO: Pod "pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39" satisfied condition "Succeeded or Failed" May 1 00:54:24.246: INFO: Trying to get logs from node latest-worker pod pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39 container test-container: STEP: delete the pod May 1 00:54:24.368: INFO: Waiting for pod pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39 to disappear May 1 00:54:24.414: INFO: Pod pod-557d5e82-7ec6-4ba9-8fbf-323c6d232d39 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:24.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7571" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":206,"skipped":3455,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 1 00:54:24.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 1 00:54:29.086: INFO: Successfully updated pod "adopt-release-ckg7b" STEP: Checking that the Job readopts the Pod May 1 00:54:29.086: INFO: Waiting up to 15m0s for pod "adopt-release-ckg7b" in namespace "job-5093" to be "adopted" May 1 00:54:29.119: INFO: Pod "adopt-release-ckg7b": Phase="Running", Reason="", readiness=true. Elapsed: 32.479023ms May 1 00:54:31.123: INFO: Pod "adopt-release-ckg7b": Phase="Running", Reason="", readiness=true. Elapsed: 2.036794s May 1 00:54:31.123: INFO: Pod "adopt-release-ckg7b" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 1 00:54:31.634: INFO: Successfully updated pod "adopt-release-ckg7b" STEP: Checking that the Job releases the Pod May 1 00:54:31.634: INFO: Waiting up to 15m0s for pod "adopt-release-ckg7b" in namespace "job-5093" to be "released" May 1 00:54:31.653: INFO: Pod "adopt-release-ckg7b": Phase="Running", Reason="", readiness=true. Elapsed: 18.87921ms May 1 00:54:33.658: INFO: Pod "adopt-release-ckg7b": Phase="Running", Reason="", readiness=true. Elapsed: 2.023516653s May 1 00:54:33.658: INFO: Pod "adopt-release-ckg7b" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 1 00:54:33.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5093" for this suite. 
• [SLOW TEST:9.241 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":290,"completed":207,"skipped":3473,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:33.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 00:54:33.783: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
alternatives.log
containers/

(the same two-entry listing was returned for each of the remaining proxied requests)
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:34.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8769" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":290,"completed":209,"skipped":3486,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:34.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May  1 00:54:34.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f" in namespace "projected-7418" to be "Succeeded or Failed"
May  1 00:54:34.387: INFO: Pod "downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.391729ms
May  1 00:54:36.392: INFO: Pod "downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013918758s
May  1 00:54:38.396: INFO: Pod "downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018419721s
STEP: Saw pod success
May  1 00:54:38.396: INFO: Pod "downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f" satisfied condition "Succeeded or Failed"
May  1 00:54:38.400: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f container client-container: 
STEP: delete the pod
May  1 00:54:38.432: INFO: Waiting for pod downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f to disappear
May  1 00:54:38.448: INFO: Pod downwardapi-volume-6278642e-4e89-4609-97da-0769e1ec349f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:38.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7418" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":290,"completed":210,"skipped":3500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:38.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 00:54:38.604: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a13ea2a0-f75b-4d0c-8394-fdec45d04f7f", Controller:(*bool)(0xc004b97eaa), BlockOwnerDeletion:(*bool)(0xc004b97eab)}}
May  1 00:54:38.636: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"edfc429b-f98e-4ee2-8f88-5e118a447e44", Controller:(*bool)(0xc00484a17a), BlockOwnerDeletion:(*bool)(0xc00484a17b)}}
May  1 00:54:38.672: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"51057912-aaad-42a1-a2d2-67113ca33692", Controller:(*bool)(0xc004a5a592), BlockOwnerDeletion:(*bool)(0xc004a5a593)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:43.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-685" for this suite.

• [SLOW TEST:5.291 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":290,"completed":211,"skipped":3531,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:43.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May  1 00:54:43.965: INFO: Created pod &Pod{ObjectMeta:{dns-1484  dns-1484 /api/v1/namespaces/dns-1484/pods/dns-1484 794f2ad1-ed16-47da-88d9-fbaec003da64 463920 0 2020-05-01 00:54:43 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-05-01 00:54:43 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gk2ml,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gk2ml,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gk2ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 00:54:44.017: INFO: The status of Pod dns-1484 is Pending, waiting for it to be Running (with Ready = true)
May  1 00:54:46.022: INFO: The status of Pod dns-1484 is Pending, waiting for it to be Running (with Ready = true)
May  1 00:54:48.021: INFO: The status of Pod dns-1484 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
May  1 00:54:48.021: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1484 PodName:dns-1484 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 00:54:48.021: INFO: >>> kubeConfig: /root/.kube/config
I0501 00:54:48.055074       7 log.go:172] (0xc002f224d0) (0xc000be0dc0) Create stream
I0501 00:54:48.055116       7 log.go:172] (0xc002f224d0) (0xc000be0dc0) Stream added, broadcasting: 1
I0501 00:54:48.056941       7 log.go:172] (0xc002f224d0) Reply frame received for 1
I0501 00:54:48.056970       7 log.go:172] (0xc002f224d0) (0xc001374780) Create stream
I0501 00:54:48.056982       7 log.go:172] (0xc002f224d0) (0xc001374780) Stream added, broadcasting: 3
I0501 00:54:48.057949       7 log.go:172] (0xc002f224d0) Reply frame received for 3
I0501 00:54:48.057977       7 log.go:172] (0xc002f224d0) (0xc000be14a0) Create stream
I0501 00:54:48.057991       7 log.go:172] (0xc002f224d0) (0xc000be14a0) Stream added, broadcasting: 5
I0501 00:54:48.058691       7 log.go:172] (0xc002f224d0) Reply frame received for 5
I0501 00:54:48.117041       7 log.go:172] (0xc002f224d0) Data frame received for 3
I0501 00:54:48.117069       7 log.go:172] (0xc001374780) (3) Data frame handling
I0501 00:54:48.117085       7 log.go:172] (0xc001374780) (3) Data frame sent
I0501 00:54:48.118183       7 log.go:172] (0xc002f224d0) Data frame received for 3
I0501 00:54:48.118213       7 log.go:172] (0xc001374780) (3) Data frame handling
I0501 00:54:48.118237       7 log.go:172] (0xc002f224d0) Data frame received for 5
I0501 00:54:48.118278       7 log.go:172] (0xc000be14a0) (5) Data frame handling
I0501 00:54:48.120058       7 log.go:172] (0xc002f224d0) Data frame received for 1
I0501 00:54:48.120076       7 log.go:172] (0xc000be0dc0) (1) Data frame handling
I0501 00:54:48.120096       7 log.go:172] (0xc000be0dc0) (1) Data frame sent
I0501 00:54:48.120111       7 log.go:172] (0xc002f224d0) (0xc000be0dc0) Stream removed, broadcasting: 1
I0501 00:54:48.120127       7 log.go:172] (0xc002f224d0) Go away received
I0501 00:54:48.120267       7 log.go:172] (0xc002f224d0) (0xc000be0dc0) Stream removed, broadcasting: 1
I0501 00:54:48.120290       7 log.go:172] (0xc002f224d0) (0xc001374780) Stream removed, broadcasting: 3
I0501 00:54:48.120300       7 log.go:172] (0xc002f224d0) (0xc000be14a0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
May  1 00:54:48.120: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1484 PodName:dns-1484 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 00:54:48.120: INFO: >>> kubeConfig: /root/.kube/config
I0501 00:54:48.147391       7 log.go:172] (0xc002e0f080) (0xc000c0b7c0) Create stream
I0501 00:54:48.147419       7 log.go:172] (0xc002e0f080) (0xc000c0b7c0) Stream added, broadcasting: 1
I0501 00:54:48.149636       7 log.go:172] (0xc002e0f080) Reply frame received for 1
I0501 00:54:48.149682       7 log.go:172] (0xc002e0f080) (0xc000be1860) Create stream
I0501 00:54:48.149697       7 log.go:172] (0xc002e0f080) (0xc000be1860) Stream added, broadcasting: 3
I0501 00:54:48.150465       7 log.go:172] (0xc002e0f080) Reply frame received for 3
I0501 00:54:48.150498       7 log.go:172] (0xc002e0f080) (0xc000c0b900) Create stream
I0501 00:54:48.150508       7 log.go:172] (0xc002e0f080) (0xc000c0b900) Stream added, broadcasting: 5
I0501 00:54:48.151349       7 log.go:172] (0xc002e0f080) Reply frame received for 5
I0501 00:54:48.226918       7 log.go:172] (0xc002e0f080) Data frame received for 3
I0501 00:54:48.226956       7 log.go:172] (0xc000be1860) (3) Data frame handling
I0501 00:54:48.226979       7 log.go:172] (0xc000be1860) (3) Data frame sent
I0501 00:54:48.227758       7 log.go:172] (0xc002e0f080) Data frame received for 3
I0501 00:54:48.227796       7 log.go:172] (0xc000be1860) (3) Data frame handling
I0501 00:54:48.227818       7 log.go:172] (0xc002e0f080) Data frame received for 5
I0501 00:54:48.227831       7 log.go:172] (0xc000c0b900) (5) Data frame handling
I0501 00:54:48.229408       7 log.go:172] (0xc002e0f080) Data frame received for 1
I0501 00:54:48.229433       7 log.go:172] (0xc000c0b7c0) (1) Data frame handling
I0501 00:54:48.229457       7 log.go:172] (0xc000c0b7c0) (1) Data frame sent
I0501 00:54:48.229478       7 log.go:172] (0xc002e0f080) (0xc000c0b7c0) Stream removed, broadcasting: 1
I0501 00:54:48.229495       7 log.go:172] (0xc002e0f080) Go away received
I0501 00:54:48.229622       7 log.go:172] (0xc002e0f080) (0xc000c0b7c0) Stream removed, broadcasting: 1
I0501 00:54:48.229644       7 log.go:172] (0xc002e0f080) (0xc000be1860) Stream removed, broadcasting: 3
I0501 00:54:48.229651       7 log.go:172] (0xc002e0f080) (0xc000c0b900) Stream removed, broadcasting: 5
May  1 00:54:48.229: INFO: Deleting pod dns-1484...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:48.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1484" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":290,"completed":212,"skipped":3544,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:48.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:48.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1458" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":290,"completed":213,"skipped":3583,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:48.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-918d8fc4-3ce6-4775-8fb8-66a64a1f8fde
STEP: Creating secret with name secret-projected-all-test-volume-6ccf3b74-1bb4-41d4-a2b3-73645b922083
STEP: Creating a pod to test Check all projections for projected volume plugin
May  1 00:54:48.834: INFO: Waiting up to 5m0s for pod "projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6" in namespace "projected-160" to be "Succeeded or Failed"
May  1 00:54:48.918: INFO: Pod "projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 84.221811ms
May  1 00:54:50.922: INFO: Pod "projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088698968s
May  1 00:54:52.927: INFO: Pod "projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093603528s
STEP: Saw pod success
May  1 00:54:52.927: INFO: Pod "projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6" satisfied condition "Succeeded or Failed"
May  1 00:54:52.930: INFO: Trying to get logs from node latest-worker2 pod projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6 container projected-all-volume-test: 
STEP: delete the pod
May  1 00:54:53.034: INFO: Waiting for pod projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6 to disappear
May  1 00:54:53.059: INFO: Pod projected-volume-f1ce0493-92b0-4b1a-bd4d-7251a1cad2d6 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:53.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-160" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":290,"completed":214,"skipped":3594,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:53.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:57.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7724" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":290,"completed":215,"skipped":3640,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:57.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
May  1 00:54:57.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info'
May  1 00:54:57.485: INFO: stderr: ""
May  1 00:54:57.485: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:54:57.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1479" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":290,"completed":216,"skipped":3641,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:54:57.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May  1 00:54:57.684: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:54:57.700: INFO: Number of nodes with available pods: 0
May  1 00:54:57.700: INFO: Node latest-worker is running more than one daemon pod
May  1 00:54:58.706: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:54:58.710: INFO: Number of nodes with available pods: 0
May  1 00:54:58.710: INFO: Node latest-worker is running more than one daemon pod
May  1 00:54:59.949: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:54:59.995: INFO: Number of nodes with available pods: 0
May  1 00:54:59.995: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:00.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:00.789: INFO: Number of nodes with available pods: 0
May  1 00:55:00.789: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:01.716: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:01.731: INFO: Number of nodes with available pods: 2
May  1 00:55:01.731: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May  1 00:55:01.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:01.785: INFO: Number of nodes with available pods: 1
May  1 00:55:01.785: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:02.800: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:02.803: INFO: Number of nodes with available pods: 1
May  1 00:55:02.803: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:03.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:03.796: INFO: Number of nodes with available pods: 1
May  1 00:55:03.796: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:04.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:04.795: INFO: Number of nodes with available pods: 1
May  1 00:55:04.795: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:05.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:05.795: INFO: Number of nodes with available pods: 1
May  1 00:55:05.795: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:06.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:06.796: INFO: Number of nodes with available pods: 1
May  1 00:55:06.796: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:07.790: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:07.793: INFO: Number of nodes with available pods: 1
May  1 00:55:07.793: INFO: Node latest-worker is running more than one daemon pod
May  1 00:55:08.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 00:55:08.796: INFO: Number of nodes with available pods: 2
May  1 00:55:08.796: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-671, will wait for the garbage collector to delete the pods
May  1 00:55:08.860: INFO: Deleting DaemonSet.extensions daemon-set took: 7.023698ms
May  1 00:55:09.160: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.250481ms
May  1 00:55:24.882: INFO: Number of nodes with available pods: 0
May  1 00:55:24.882: INFO: Number of running nodes: 0, number of available pods: 0
May  1 00:55:24.884: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-671/daemonsets","resourceVersion":"464208"},"items":null}

May  1 00:55:24.886: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-671/pods","resourceVersion":"464208"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:55:24.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-671" for this suite.

• [SLOW TEST:27.390 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":290,"completed":217,"skipped":3646,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:55:24.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 00:55:24.969: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/: 
alternatives.log
containers/

(the same two-entry listing was returned for each of the remaining proxied requests)
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-34ba0c10-7183-42f4-b347-c25a6100f8e7
STEP: Creating a pod to test consume configMaps
May  1 00:55:25.177: INFO: Waiting up to 5m0s for pod "pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60" in namespace "configmap-2242" to be "Succeeded or Failed"
May  1 00:55:25.194: INFO: Pod "pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60": Phase="Pending", Reason="", readiness=false. Elapsed: 17.278355ms
May  1 00:55:27.199: INFO: Pod "pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021770595s
May  1 00:55:29.204: INFO: Pod "pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60": Phase="Running", Reason="", readiness=true. Elapsed: 4.026709832s
May  1 00:55:31.209: INFO: Pod "pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031715031s
STEP: Saw pod success
May  1 00:55:31.209: INFO: Pod "pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60" satisfied condition "Succeeded or Failed"
May  1 00:55:31.213: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60 container configmap-volume-test: 
STEP: delete the pod
May  1 00:55:31.260: INFO: Waiting for pod pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60 to disappear
May  1 00:55:31.263: INFO: Pod pod-configmaps-d01e25fb-9c41-425e-b1f1-37864d626e60 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:55:31.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2242" for this suite.

• [SLOW TEST:6.198 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":290,"completed":219,"skipped":3650,"failed":0}
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:55:31.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311
STEP: creating the pod
May  1 00:55:31.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7553'
May  1 00:55:31.793: INFO: stderr: ""
May  1 00:55:31.794: INFO: stdout: "pod/pause created\n"
May  1 00:55:31.794: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May  1 00:55:31.794: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7553" to be "running and ready"
May  1 00:55:31.809: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.382282ms
May  1 00:55:33.939: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145408951s
May  1 00:55:35.944: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.149896469s
May  1 00:55:35.944: INFO: Pod "pause" satisfied condition "running and ready"
May  1 00:55:35.944: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: adding the label testing-label with value testing-label-value to a pod
May  1 00:55:35.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7553'
May  1 00:55:36.053: INFO: stderr: ""
May  1 00:55:36.053: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May  1 00:55:36.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7553'
May  1 00:55:36.159: INFO: stderr: ""
May  1 00:55:36.159: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May  1 00:55:36.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7553'
May  1 00:55:36.276: INFO: stderr: ""
May  1 00:55:36.276: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May  1 00:55:36.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7553'
May  1 00:55:36.386: INFO: stderr: ""
May  1 00:55:36.386: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318
STEP: using delete to clean up resources
May  1 00:55:36.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7553'
May  1 00:55:36.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 00:55:36.581: INFO: stdout: "pod \"pause\" force deleted\n"
May  1 00:55:36.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7553'
May  1 00:55:36.696: INFO: stderr: "No resources found in kubectl-7553 namespace.\n"
May  1 00:55:36.696: INFO: stdout: ""
May  1 00:55:36.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7553 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  1 00:55:36.925: INFO: stderr: ""
May  1 00:55:36.925: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 00:55:36.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7553" for this suite.

• [SLOW TEST:5.664 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":290,"completed":220,"skipped":3650,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 00:55:36.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-3836
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-3836
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3836
May  1 00:55:37.326: INFO: Found 0 stateful pods, waiting for 1
May  1 00:55:47.331: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May  1 00:55:47.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 00:55:47.606: INFO: stderr: "I0501 00:55:47.504419    2287 log.go:172] (0xc00097b550) (0xc0009181e0) Create stream\nI0501 00:55:47.504457    2287 log.go:172] (0xc00097b550) (0xc0009181e0) Stream added, broadcasting: 1\nI0501 00:55:47.514031    2287 log.go:172] (0xc00097b550) Reply frame received for 1\nI0501 00:55:47.514103    2287 log.go:172] (0xc00097b550) (0xc0009d61e0) Create stream\nI0501 00:55:47.514123    2287 log.go:172] (0xc00097b550) (0xc0009d61e0) Stream added, broadcasting: 3\nI0501 00:55:47.515417    2287 log.go:172] (0xc00097b550) Reply frame received for 3\nI0501 00:55:47.515456    2287 log.go:172] (0xc00097b550) (0xc000616fa0) Create stream\nI0501 00:55:47.515466    2287 log.go:172] (0xc00097b550) (0xc000616fa0) Stream added, broadcasting: 5\nI0501 00:55:47.516330    2287 log.go:172] (0xc00097b550) Reply frame received for 5\nI0501 00:55:47.571648    2287 log.go:172] (0xc00097b550) Data frame received for 5\nI0501 00:55:47.571705    2287 log.go:172] (0xc000616fa0) (5) Data frame handling\nI0501 00:55:47.571732    2287 log.go:172] (0xc000616fa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:55:47.600342    2287 log.go:172] (0xc00097b550) Data frame received for 5\nI0501 00:55:47.600373    2287 log.go:172] (0xc000616fa0) (5) Data frame handling\nI0501 00:55:47.600390    2287 log.go:172] (0xc00097b550) Data frame received for 3\nI0501 00:55:47.600396    2287 log.go:172] (0xc0009d61e0) (3) Data frame handling\nI0501 00:55:47.600403    2287 log.go:172] (0xc0009d61e0) (3) Data frame sent\nI0501 00:55:47.600497    2287 log.go:172] (0xc00097b550) Data frame received for 3\nI0501 00:55:47.600512    2287 log.go:172] (0xc0009d61e0) (3) Data frame handling\nI0501 00:55:47.602583    2287 log.go:172] (0xc00097b550) Data frame received for 1\nI0501 00:55:47.602621    2287 log.go:172] (0xc0009181e0) (1) Data frame handling\nI0501 00:55:47.602644    2287 log.go:172] (0xc0009181e0) (1) Data frame sent\nI0501 00:55:47.602676    2287 log.go:172] (0xc00097b550) (0xc0009181e0) Stream removed, broadcasting: 1\nI0501 00:55:47.602706    2287 log.go:172] (0xc00097b550) Go away received\nI0501 00:55:47.602950    2287 log.go:172] (0xc00097b550) (0xc0009181e0) Stream removed, broadcasting: 1\nI0501 00:55:47.602963    2287 log.go:172] (0xc00097b550) (0xc0009d61e0) Stream removed, broadcasting: 3\nI0501 00:55:47.602968    2287 log.go:172] (0xc00097b550) (0xc000616fa0) Stream removed, broadcasting: 5\n"
May  1 00:55:47.607: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 00:55:47.607: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 00:55:47.631: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May  1 00:55:57.655: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  1 00:55:57.655: INFO: Waiting for statefulset status.replicas updated to 0
May  1 00:55:57.683: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
May  1 00:55:57.684: INFO: ss-0  latest-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:55:57.684: INFO: 
May  1 00:55:57.684: INFO: StatefulSet ss has not reached scale 3, at 1
May  1 00:55:58.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992623852s
May  1 00:55:59.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986953582s
May  1 00:56:00.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.786806905s
May  1 00:56:01.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.715211503s
May  1 00:56:02.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.709333613s
May  1 00:56:03.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.704294728s
May  1 00:56:04.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.698841511s
May  1 00:56:05.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.69421481s
May  1 00:56:06.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 689.095091ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3836
May  1 00:56:07.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:56:08.211: INFO: stderr: "I0501 00:56:08.133275    2303 log.go:172] (0xc000a8e000) (0xc0006a90e0) Create stream\nI0501 00:56:08.133334    2303 log.go:172] (0xc000a8e000) (0xc0006a90e0) Stream added, broadcasting: 1\nI0501 00:56:08.135416    2303 log.go:172] (0xc000a8e000) Reply frame received for 1\nI0501 00:56:08.135465    2303 log.go:172] (0xc000a8e000) (0xc0005823c0) Create stream\nI0501 00:56:08.135475    2303 log.go:172] (0xc000a8e000) (0xc0005823c0) Stream added, broadcasting: 3\nI0501 00:56:08.136242    2303 log.go:172] (0xc000a8e000) Reply frame received for 3\nI0501 00:56:08.136268    2303 log.go:172] (0xc000a8e000) (0xc0004e4f00) Create stream\nI0501 00:56:08.136278    2303 log.go:172] (0xc000a8e000) (0xc0004e4f00) Stream added, broadcasting: 5\nI0501 00:56:08.136977    2303 log.go:172] (0xc000a8e000) Reply frame received for 5\nI0501 00:56:08.204655    2303 log.go:172] (0xc000a8e000) Data frame received for 5\nI0501 00:56:08.204684    2303 log.go:172] (0xc0004e4f00) (5) Data frame handling\nI0501 00:56:08.204693    2303 log.go:172] (0xc0004e4f00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 00:56:08.204706    2303 log.go:172] (0xc000a8e000) Data frame received for 3\nI0501 00:56:08.204718    2303 log.go:172] (0xc000a8e000) Data frame received for 5\nI0501 00:56:08.204728    2303 log.go:172] (0xc0004e4f00) (5) Data frame handling\nI0501 00:56:08.204742    2303 log.go:172] (0xc0005823c0) (3) Data frame handling\nI0501 00:56:08.204752    2303 log.go:172] (0xc0005823c0) (3) Data frame sent\nI0501 00:56:08.204758    2303 log.go:172] (0xc000a8e000) Data frame received for 3\nI0501 00:56:08.204762    2303 log.go:172] (0xc0005823c0) (3) Data frame handling\nI0501 00:56:08.206641    2303 log.go:172] (0xc000a8e000) Data frame received for 1\nI0501 00:56:08.206661    2303 log.go:172] (0xc0006a90e0) (1) Data frame handling\nI0501 00:56:08.206668    2303 log.go:172] (0xc0006a90e0) (1) Data frame sent\nI0501 00:56:08.206677    2303 log.go:172] (0xc000a8e000) (0xc0006a90e0) Stream removed, broadcasting: 1\nI0501 00:56:08.206761    2303 log.go:172] (0xc000a8e000) Go away received\nI0501 00:56:08.206938    2303 log.go:172] (0xc000a8e000) (0xc0006a90e0) Stream removed, broadcasting: 1\nI0501 00:56:08.206950    2303 log.go:172] (0xc000a8e000) (0xc0005823c0) Stream removed, broadcasting: 3\nI0501 00:56:08.206956    2303 log.go:172] (0xc000a8e000) (0xc0004e4f00) Stream removed, broadcasting: 5\n"
May  1 00:56:08.211: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 00:56:08.211: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  1 00:56:08.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:56:08.442: INFO: stderr: "I0501 00:56:08.352082    2326 log.go:172] (0xc000927600) (0xc000be41e0) Create stream\nI0501 00:56:08.352142    2326 log.go:172] (0xc000927600) (0xc000be41e0) Stream added, broadcasting: 1\nI0501 00:56:08.356502    2326 log.go:172] (0xc000927600) Reply frame received for 1\nI0501 00:56:08.356534    2326 log.go:172] (0xc000927600) (0xc00080e5a0) Create stream\nI0501 00:56:08.356543    2326 log.go:172] (0xc000927600) (0xc00080e5a0) Stream added, broadcasting: 3\nI0501 00:56:08.357763    2326 log.go:172] (0xc000927600) Reply frame received for 3\nI0501 00:56:08.357794    2326 log.go:172] (0xc000927600) (0xc000798c80) Create stream\nI0501 00:56:08.357805    2326 log.go:172] (0xc000927600) (0xc000798c80) Stream added, broadcasting: 5\nI0501 00:56:08.358689    2326 log.go:172] (0xc000927600) Reply frame received for 5\nI0501 00:56:08.434817    2326 log.go:172] (0xc000927600) Data frame received for 5\nI0501 00:56:08.434844    2326 log.go:172] (0xc000798c80) (5) Data frame handling\nI0501 00:56:08.434856    2326 log.go:172] (0xc000798c80) (5) Data frame sent\nI0501 00:56:08.434864    2326 log.go:172] (0xc000927600) Data frame received for 5\nI0501 00:56:08.434870    2326 log.go:172] (0xc000798c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0501 00:56:08.434891    2326 log.go:172] (0xc000927600) Data frame received for 3\nI0501 00:56:08.434900    2326 log.go:172] (0xc00080e5a0) (3) Data frame handling\nI0501 00:56:08.434908    2326 log.go:172] (0xc00080e5a0) (3) Data frame sent\nI0501 00:56:08.434915    2326 log.go:172] (0xc000927600) Data frame received for 3\nI0501 00:56:08.434923    2326 log.go:172] (0xc00080e5a0) (3) Data frame handling\nI0501 00:56:08.436610    2326 log.go:172] (0xc000927600) Data frame received for 1\nI0501 00:56:08.436632    2326 log.go:172] (0xc000be41e0) (1) Data frame handling\nI0501 00:56:08.436655    2326 log.go:172] (0xc000be41e0) (1) Data frame sent\nI0501 00:56:08.436677    2326 log.go:172] (0xc000927600) (0xc000be41e0) Stream removed, broadcasting: 1\nI0501 00:56:08.436699    2326 log.go:172] (0xc000927600) Go away received\nI0501 00:56:08.437307    2326 log.go:172] (0xc000927600) (0xc000be41e0) Stream removed, broadcasting: 1\nI0501 00:56:08.437359    2326 log.go:172] (0xc000927600) (0xc00080e5a0) Stream removed, broadcasting: 3\nI0501 00:56:08.437377    2326 log.go:172] (0xc000927600) (0xc000798c80) Stream removed, broadcasting: 5\n"
May  1 00:56:08.442: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 00:56:08.442: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  1 00:56:08.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:56:08.663: INFO: stderr: "I0501 00:56:08.579588    2346 log.go:172] (0xc000b951e0) (0xc000bd0320) Create stream\nI0501 00:56:08.579658    2346 log.go:172] (0xc000b951e0) (0xc000bd0320) Stream added, broadcasting: 1\nI0501 00:56:08.584795    2346 log.go:172] (0xc000b951e0) Reply frame received for 1\nI0501 00:56:08.584827    2346 log.go:172] (0xc000b951e0) (0xc000852000) Create stream\nI0501 00:56:08.584835    2346 log.go:172] (0xc000b951e0) (0xc000852000) Stream added, broadcasting: 3\nI0501 00:56:08.586031    2346 log.go:172] (0xc000b951e0) Reply frame received for 3\nI0501 00:56:08.586090    2346 log.go:172] (0xc000b951e0) (0xc000640640) Create stream\nI0501 00:56:08.586116    2346 log.go:172] (0xc000b951e0) (0xc000640640) Stream added, broadcasting: 5\nI0501 00:56:08.587288    2346 log.go:172] (0xc000b951e0) Reply frame received for 5\nI0501 00:56:08.657488    2346 log.go:172] (0xc000b951e0) Data frame received for 5\nI0501 00:56:08.657527    2346 log.go:172] (0xc000640640) (5) Data frame handling\nI0501 00:56:08.657536    2346 log.go:172] (0xc000640640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0501 00:56:08.657553    2346 log.go:172] (0xc000b951e0) Data frame received for 3\nI0501 00:56:08.657586    2346 log.go:172] (0xc000852000) (3) Data frame handling\nI0501 00:56:08.657598    2346 log.go:172] (0xc000852000) (3) Data frame sent\nI0501 00:56:08.657621    2346 log.go:172] (0xc000b951e0) Data frame received for 5\nI0501 00:56:08.657634    2346 log.go:172] (0xc000640640) (5) Data frame handling\nI0501 00:56:08.657812    2346 log.go:172] (0xc000b951e0) Data frame received for 3\nI0501 00:56:08.657827    2346 log.go:172] (0xc000852000) (3) Data frame handling\nI0501 00:56:08.659415    2346 log.go:172] (0xc000b951e0) Data frame received for 1\nI0501 00:56:08.659428    2346 log.go:172] (0xc000bd0320) (1) Data frame handling\nI0501 00:56:08.659435    2346 log.go:172] (0xc000bd0320) (1) Data frame sent\nI0501 00:56:08.659447    2346 log.go:172] (0xc000b951e0) (0xc000bd0320) Stream removed, broadcasting: 1\nI0501 00:56:08.659496    2346 log.go:172] (0xc000b951e0) Go away received\nI0501 00:56:08.659733    2346 log.go:172] (0xc000b951e0) (0xc000bd0320) Stream removed, broadcasting: 1\nI0501 00:56:08.659753    2346 log.go:172] (0xc000b951e0) (0xc000852000) Stream removed, broadcasting: 3\nI0501 00:56:08.659762    2346 log.go:172] (0xc000b951e0) (0xc000640640) Stream removed, broadcasting: 5\n"
May  1 00:56:08.663: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 00:56:08.663: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

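Each of the mv invocations above is the harness's RunHostCmd pattern: kubectl exec runs a shell inside the pod, and the trailing "|| true" forces a zero exit code even when mv has nothing to move, so a pod that never lost its index.html does not fail the step. A minimal sketch of the same pattern in Go, using only the standard library and assuming kubectl is on PATH (runHostCmd is a hypothetical helper, not the framework's own):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runHostCmd mimics the harness: run a shell command inside a pod
    // via kubectl exec and return the combined output.
    func runHostCmd(ns, pod, cmd string) (string, error) {
        out, err := exec.Command(
            "kubectl", "--kubeconfig=/root/.kube/config",
            "exec", "--namespace="+ns, pod,
            "--", "/bin/sh", "-x", "-c", cmd,
        ).CombinedOutput()
        return string(out), err
    }

    func main() {
        // "|| true" means a missing /tmp/index.html still exits 0; only
        // a kubectl-level failure (pod or container gone) yields rc != 0.
        out, err := runHostCmd("statefulset-3836", "ss-0",
            "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
        fmt.Println(out, err)
    }
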
May  1 00:56:08.667: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
May  1 00:56:18.672: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May  1 00:56:18.672: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May  1 00:56:18.672: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May  1 00:56:18.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 00:56:18.901: INFO: stderr: "I0501 00:56:18.821931    2368 log.go:172] (0xc0009d0e70) (0xc0008490e0) Create stream\nI0501 00:56:18.822006    2368 log.go:172] (0xc0009d0e70) (0xc0008490e0) Stream added, broadcasting: 1\nI0501 00:56:18.828055    2368 log.go:172] (0xc0009d0e70) Reply frame received for 1\nI0501 00:56:18.828686    2368 log.go:172] (0xc0009d0e70) (0xc0006c4c80) Create stream\nI0501 00:56:18.828761    2368 log.go:172] (0xc0009d0e70) (0xc0006c4c80) Stream added, broadcasting: 3\nI0501 00:56:18.830428    2368 log.go:172] (0xc0009d0e70) Reply frame received for 3\nI0501 00:56:18.830493    2368 log.go:172] (0xc0009d0e70) (0xc0000f2e60) Create stream\nI0501 00:56:18.830534    2368 log.go:172] (0xc0009d0e70) (0xc0000f2e60) Stream added, broadcasting: 5\nI0501 00:56:18.833016    2368 log.go:172] (0xc0009d0e70) Reply frame received for 5\nI0501 00:56:18.895393    2368 log.go:172] (0xc0009d0e70) Data frame received for 3\nI0501 00:56:18.895425    2368 log.go:172] (0xc0006c4c80) (3) Data frame handling\nI0501 00:56:18.895433    2368 log.go:172] (0xc0006c4c80) (3) Data frame sent\nI0501 00:56:18.895438    2368 log.go:172] (0xc0009d0e70) Data frame received for 3\nI0501 00:56:18.895443    2368 log.go:172] (0xc0006c4c80) (3) Data frame handling\nI0501 00:56:18.895464    2368 log.go:172] (0xc0009d0e70) Data frame received for 5\nI0501 00:56:18.895471    2368 log.go:172] (0xc0000f2e60) (5) Data frame handling\nI0501 00:56:18.895477    2368 log.go:172] (0xc0000f2e60) (5) Data frame sent\nI0501 00:56:18.895482    2368 log.go:172] (0xc0009d0e70) Data frame received for 5\nI0501 00:56:18.895486    2368 log.go:172] (0xc0000f2e60) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:56:18.896562    2368 log.go:172] (0xc0009d0e70) Data frame received for 1\nI0501 00:56:18.896575    2368 log.go:172] (0xc0008490e0) (1) Data frame handling\nI0501 00:56:18.896585    2368 log.go:172] (0xc0008490e0) (1) Data frame sent\nI0501 00:56:18.896598    2368 log.go:172] (0xc0009d0e70) (0xc0008490e0) Stream removed, broadcasting: 1\nI0501 00:56:18.896674    2368 log.go:172] (0xc0009d0e70) Go away received\nI0501 00:56:18.896841    2368 log.go:172] (0xc0009d0e70) (0xc0008490e0) Stream removed, broadcasting: 1\nI0501 00:56:18.896854    2368 log.go:172] (0xc0009d0e70) (0xc0006c4c80) Stream removed, broadcasting: 3\nI0501 00:56:18.896862    2368 log.go:172] (0xc0009d0e70) (0xc0000f2e60) Stream removed, broadcasting: 5\n"
May  1 00:56:18.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 00:56:18.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 00:56:18.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 00:56:19.144: INFO: stderr: "I0501 00:56:19.043910    2387 log.go:172] (0xc0009051e0) (0xc000b3a3c0) Create stream\nI0501 00:56:19.043984    2387 log.go:172] (0xc0009051e0) (0xc000b3a3c0) Stream added, broadcasting: 1\nI0501 00:56:19.048717    2387 log.go:172] (0xc0009051e0) Reply frame received for 1\nI0501 00:56:19.048758    2387 log.go:172] (0xc0009051e0) (0xc0005905a0) Create stream\nI0501 00:56:19.048785    2387 log.go:172] (0xc0009051e0) (0xc0005905a0) Stream added, broadcasting: 3\nI0501 00:56:19.050346    2387 log.go:172] (0xc0009051e0) Reply frame received for 3\nI0501 00:56:19.050396    2387 log.go:172] (0xc0009051e0) (0xc0005181e0) Create stream\nI0501 00:56:19.050409    2387 log.go:172] (0xc0009051e0) (0xc0005181e0) Stream added, broadcasting: 5\nI0501 00:56:19.051567    2387 log.go:172] (0xc0009051e0) Reply frame received for 5\nI0501 00:56:19.101923    2387 log.go:172] (0xc0009051e0) Data frame received for 5\nI0501 00:56:19.101952    2387 log.go:172] (0xc0005181e0) (5) Data frame handling\nI0501 00:56:19.101970    2387 log.go:172] (0xc0005181e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:56:19.136324    2387 log.go:172] (0xc0009051e0) Data frame received for 3\nI0501 00:56:19.136345    2387 log.go:172] (0xc0005905a0) (3) Data frame handling\nI0501 00:56:19.136353    2387 log.go:172] (0xc0005905a0) (3) Data frame sent\nI0501 00:56:19.136476    2387 log.go:172] (0xc0009051e0) Data frame received for 5\nI0501 00:56:19.136513    2387 log.go:172] (0xc0005181e0) (5) Data frame handling\nI0501 00:56:19.136537    2387 log.go:172] (0xc0009051e0) Data frame received for 3\nI0501 00:56:19.136551    2387 log.go:172] (0xc0005905a0) (3) Data frame handling\nI0501 00:56:19.138438    2387 log.go:172] (0xc0009051e0) Data frame received for 1\nI0501 00:56:19.138453    2387 log.go:172] (0xc000b3a3c0) (1) Data frame handling\nI0501 00:56:19.138460    2387 log.go:172] (0xc000b3a3c0) (1) Data frame sent\nI0501 00:56:19.138467    2387 log.go:172] (0xc0009051e0) (0xc000b3a3c0) Stream removed, broadcasting: 1\nI0501 00:56:19.138474    2387 log.go:172] (0xc0009051e0) Go away received\nI0501 00:56:19.138886    2387 log.go:172] (0xc0009051e0) (0xc000b3a3c0) Stream removed, broadcasting: 1\nI0501 00:56:19.138912    2387 log.go:172] (0xc0009051e0) (0xc0005905a0) Stream removed, broadcasting: 3\nI0501 00:56:19.138926    2387 log.go:172] (0xc0009051e0) (0xc0005181e0) Stream removed, broadcasting: 5\n"
May  1 00:56:19.144: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 00:56:19.144: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 00:56:19.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 00:56:19.420: INFO: stderr: "I0501 00:56:19.326788    2407 log.go:172] (0xc000a813f0) (0xc000ce45a0) Create stream\nI0501 00:56:19.326869    2407 log.go:172] (0xc000a813f0) (0xc000ce45a0) Stream added, broadcasting: 1\nI0501 00:56:19.334187    2407 log.go:172] (0xc000a813f0) Reply frame received for 1\nI0501 00:56:19.334225    2407 log.go:172] (0xc000a813f0) (0xc000bca3c0) Create stream\nI0501 00:56:19.334236    2407 log.go:172] (0xc000a813f0) (0xc000bca3c0) Stream added, broadcasting: 3\nI0501 00:56:19.335550    2407 log.go:172] (0xc000a813f0) Reply frame received for 3\nI0501 00:56:19.335583    2407 log.go:172] (0xc000a813f0) (0xc000ce4640) Create stream\nI0501 00:56:19.335597    2407 log.go:172] (0xc000a813f0) (0xc000ce4640) Stream added, broadcasting: 5\nI0501 00:56:19.336396    2407 log.go:172] (0xc000a813f0) Reply frame received for 5\nI0501 00:56:19.388777    2407 log.go:172] (0xc000a813f0) Data frame received for 5\nI0501 00:56:19.388818    2407 log.go:172] (0xc000ce4640) (5) Data frame handling\nI0501 00:56:19.388850    2407 log.go:172] (0xc000ce4640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 00:56:19.413042    2407 log.go:172] (0xc000a813f0) Data frame received for 5\nI0501 00:56:19.413082    2407 log.go:172] (0xc000a813f0) Data frame received for 3\nI0501 00:56:19.413227    2407 log.go:172] (0xc000bca3c0) (3) Data frame handling\nI0501 00:56:19.413242    2407 log.go:172] (0xc000bca3c0) (3) Data frame sent\nI0501 00:56:19.413265    2407 log.go:172] (0xc000ce4640) (5) Data frame handling\nI0501 00:56:19.413442    2407 log.go:172] (0xc000a813f0) Data frame received for 3\nI0501 00:56:19.413461    2407 log.go:172] (0xc000bca3c0) (3) Data frame handling\nI0501 00:56:19.414882    2407 log.go:172] (0xc000a813f0) Data frame received for 1\nI0501 00:56:19.414906    2407 log.go:172] (0xc000ce45a0) (1) Data frame handling\nI0501 00:56:19.414915    2407 log.go:172] (0xc000ce45a0) (1) Data frame sent\nI0501 00:56:19.414929    2407 log.go:172] (0xc000a813f0) (0xc000ce45a0) Stream removed, broadcasting: 1\nI0501 00:56:19.414945    2407 log.go:172] (0xc000a813f0) Go away received\nI0501 00:56:19.415253    2407 log.go:172] (0xc000a813f0) (0xc000ce45a0) Stream removed, broadcasting: 1\nI0501 00:56:19.415268    2407 log.go:172] (0xc000a813f0) (0xc000bca3c0) Stream removed, broadcasting: 3\nI0501 00:56:19.415275    2407 log.go:172] (0xc000a813f0) (0xc000ce4640) Stream removed, broadcasting: 5\n"
May  1 00:56:19.420: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 00:56:19.420: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

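Moving index.html out of htdocs is how the test makes each pod unhealthy: the webserver container's readiness probe starts failing once the index file is gone, so the kubelet flips the pod's Ready condition to False while the pod itself stays Running. The "Running - Ready=false" waits below poll exactly that condition; a client-go sketch of the same check (namespace and pod name taken from the log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("statefulset-3836").Get(
            context.TODO(), "ss-0", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A pod can be Running yet unready; readiness is a condition in
        // pod status, not a phase.
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("Running - Ready=%v (%s)\n",
                    c.Status == corev1.ConditionTrue, c.Reason)
            }
        }
    }
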
May  1 00:56:19.420: INFO: Waiting for statefulset status.replicas to be updated to 0
May  1 00:56:19.423: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May  1 00:56:29.432: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  1 00:56:29.432: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May  1 00:56:29.432: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May  1 00:56:29.495: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  1 00:56:29.495: INFO: ss-0  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:29.495: INFO: ss-1  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:29.495: INFO: ss-2  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:29.496: INFO: 
May  1 00:56:29.496: INFO: StatefulSet ss has not reached scale 0, at 3
May  1 00:56:30.512: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  1 00:56:30.512: INFO: ss-0  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:30.512: INFO: ss-1  latest-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:30.512: INFO: ss-2  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:30.512: INFO: 
May  1 00:56:30.512: INFO: StatefulSet ss has not reached scale 0, at 3
May  1 00:56:31.675: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  1 00:56:31.675: INFO: ss-0  latest-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:31.675: INFO: ss-1  latest-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:31.675: INFO: ss-2  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:31.675: INFO: 
May  1 00:56:31.675: INFO: StatefulSet ss has not reached scale 0, at 3
May  1 00:56:32.680: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  1 00:56:32.680: INFO: ss-0  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:32.681: INFO: ss-1  latest-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:32.681: INFO: ss-2  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:32.681: INFO: 
May  1 00:56:32.681: INFO: StatefulSet ss has not reached scale 0, at 3
May  1 00:56:33.686: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  1 00:56:33.686: INFO: ss-0  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:33.686: INFO: ss-1  latest-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:33.686: INFO: ss-2  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:33.686: INFO: 
May  1 00:56:33.686: INFO: StatefulSet ss has not reached scale 0, at 3
May  1 00:56:34.692: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
May  1 00:56:34.692: INFO: ss-0  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:34.692: INFO: ss-1  latest-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:34.692: INFO: ss-2  latest-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:57 +0000 UTC  }]
May  1 00:56:34.692: INFO: 
May  1 00:56:34.692: INFO: StatefulSet ss has not reached scale 0, at 3
May  1 00:56:35.698: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
May  1 00:56:35.698: INFO: ss-0  latest-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:35.698: INFO: 
May  1 00:56:35.698: INFO: StatefulSet ss has not reached scale 0, at 1
May  1 00:56:36.703: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
May  1 00:56:36.703: INFO: ss-0  latest-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:36.704: INFO: 
May  1 00:56:36.704: INFO: StatefulSet ss has not reached scale 0, at 1
May  1 00:56:37.708: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
May  1 00:56:37.708: INFO: ss-0  latest-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:37.708: INFO: 
May  1 00:56:37.708: INFO: StatefulSet ss has not reached scale 0, at 1
May  1 00:56:38.713: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
May  1 00:56:38.713: INFO: ss-0  latest-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:56:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 00:55:37 +0000 UTC  }]
May  1 00:56:38.713: INFO: 
May  1 00:56:38.713: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3836
May  1 00:56:39.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:56:39.855: INFO: rc: 1
May  1 00:56:39.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
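From here the harness retries the restore command every 10 seconds (the window in this run is about five minutes): first it fails with "container not found" while ss-0 terminates, then with NotFound once the scale-down has deleted the pod outright. A sketch of that retry loop using apimachinery's wait helpers; the kubectl invocation is the same hypothetical pattern as in the earlier sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        restore := "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"
        // Poll every 10s, matching the cadence visible in the log.
        err := wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
            out, err := exec.Command(
                "kubectl", "--kubeconfig=/root/.kube/config",
                "exec", "--namespace=statefulset-3836", "ss-0",
                "--", "/bin/sh", "-x", "-c", restore).CombinedOutput()
            if err != nil {
                fmt.Printf("retrying after rc != 0: %v\n%s\n", err, out)
                return false, nil // not done; a non-nil error would abort the poll
            }
            return true, nil
        })
        fmt.Println("poll finished:", err)
    }
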
May  1 00:56:49.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:56:49.957: INFO: rc: 1
May  1 00:56:49.957: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:56:59.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:57:00.067: INFO: rc: 1
May  1 00:57:00.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:57:10.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:57:10.172: INFO: rc: 1
May  1 00:57:10.172: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:57:20.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:57:20.280: INFO: rc: 1
May  1 00:57:20.280: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:57:30.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:57:30.406: INFO: rc: 1
May  1 00:57:30.407: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:57:40.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:57:40.525: INFO: rc: 1
May  1 00:57:40.525: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:57:50.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:57:50.632: INFO: rc: 1
May  1 00:57:50.632: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:58:00.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:58:00.738: INFO: rc: 1
May  1 00:58:00.738: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:58:10.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:58:10.836: INFO: rc: 1
May  1 00:58:10.836: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:58:20.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:58:20.941: INFO: rc: 1
May  1 00:58:20.941: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:58:30.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:58:31.061: INFO: rc: 1
May  1 00:58:31.062: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:58:41.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:58:41.175: INFO: rc: 1
May  1 00:58:41.175: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:58:51.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:58:51.283: INFO: rc: 1
May  1 00:58:51.284: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:59:01.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:59:01.376: INFO: rc: 1
May  1 00:59:01.376: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:59:11.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:59:11.487: INFO: rc: 1
May  1 00:59:11.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:59:21.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:59:21.584: INFO: rc: 1
May  1 00:59:21.584: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:59:31.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:59:31.711: INFO: rc: 1
May  1 00:59:31.711: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:59:41.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:59:41.801: INFO: rc: 1
May  1 00:59:41.801: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 00:59:51.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 00:59:51.900: INFO: rc: 1
May  1 00:59:51.900: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:00:01.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:00:02.014: INFO: rc: 1
May  1 01:00:02.014: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:00:12.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:00:12.117: INFO: rc: 1
May  1 01:00:12.118: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:00:22.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:00:22.224: INFO: rc: 1
May  1 01:00:22.224: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:00:32.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:00:32.332: INFO: rc: 1
May  1 01:00:32.332: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:00:42.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:00:42.438: INFO: rc: 1
May  1 01:00:42.438: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:00:52.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:00:52.542: INFO: rc: 1
May  1 01:00:52.542: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:01:02.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:01:02.646: INFO: rc: 1
May  1 01:01:02.646: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:01:12.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:01:12.753: INFO: rc: 1
May  1 01:01:12.753: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:01:22.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:01:22.877: INFO: rc: 1
May  1 01:01:22.877: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:01:32.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:01:32.991: INFO: rc: 1
May  1 01:01:32.991: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  1 01:01:42.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3836 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 01:01:45.946: INFO: rc: 1
May  1 01:01:45.946: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
May  1 01:01:45.946: INFO: Scaling statefulset ss to 0
May  1 01:01:45.954: INFO: Waiting for statefulset status.replicas to be updated to 0
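"Scaling statefulset ss to 0" goes through the scale subresource rather than editing the StatefulSet spec directly. A client-go sketch of the same operation (namespace and name from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sts := cs.AppsV1().StatefulSets("statefulset-3836")
        // Read-modify-write on the scale subresource, the same path
        // `kubectl scale` takes; the controller then deletes the pods
        // and status.replicas converges to 0.
        scale, err := sts.GetScale(context.TODO(), "ss", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 0
        if _, err := sts.UpdateScale(context.TODO(), "ss", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
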
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May  1 01:01:45.956: INFO: Deleting all statefulset in ns statefulset-3836
May  1 01:01:45.958: INFO: Scaling statefulset ss to 0
May  1 01:01:45.966: INFO: Waiting for statefulset status.replicas to be updated to 0
May  1 01:01:45.968: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:01:45.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3836" for this suite.

• [SLOW TEST:369.057 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":290,"completed":221,"skipped":3669,"failed":0}
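"Burst scaling" in the spec name is the Parallel pod management policy: the controller creates and deletes pods all at once instead of the default ordered, one-at-a-time sequence, which is why the scale-down above kept going while all three pods were unready. The field involved, sketched in Go (only the relevant part of the spec shown):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        spec := appsv1.StatefulSetSpec{
            // Parallel ("burst") mode: create and delete pods without
            // waiting for predecessors to become Running and Ready,
            // unlike the default OrderedReadyPodManagement policy.
            PodManagementPolicy: appsv1.ParallelPodManagement,
        }
        fmt.Println(spec.PodManagementPolicy)
    }
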
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:01:45.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-e3e50d31-4a1c-4b59-9054-d167492adb04
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:01:46.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3069" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":290,"completed":222,"skipped":3679,"failed":0}
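The empty-key case is rejected server-side: apiserver validation requires every key in a Secret's data map to be a valid, non-empty file-like name, so the Create call never results in a stored object. A client-go sketch of provoking that error (namespace and secret name illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
            // "" is not a valid data key, so the apiserver returns an
            // Invalid error instead of creating the object.
            Data: map[string][]byte{"": []byte("value-1")},
        }
        _, err = cs.CoreV1().Secrets("default").Create(
            context.TODO(), secret, metav1.CreateOptions{})
        fmt.Println("expected validation error:", err)
    }
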
SS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:01:46.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:03:46.231: INFO: Deleting pod "var-expansion-385bd93a-8a87-4c36-89c3-2960c3f2e6bb" in namespace "var-expansion-2730"
May  1 01:03:46.235: INFO: Wait up to 5m0s for pod "var-expansion-385bd93a-8a87-4c36-89c3-2960c3f2e6bb" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:03:48.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2730" for this suite.

• [SLOW TEST:122.239 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":290,"completed":223,"skipped":3681,"failed":0}
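This test mounts a volume with a subPathExpr that expands, via the container environment, to an absolute path; subpaths must stay relative to the volume root, so the pod is expected never to start, and the two-minute gap before the delete is the test waiting out that failure. A sketch of the mount shape (the env value and names are assumptions, not the test's exact fixture):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name: "dapi-container",
            Env: []corev1.EnvVar{
                // Expands to an absolute path, which is invalid for a subpath.
                {Name: "POD_NAME", Value: "/absolute-path"},
            },
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "workdir1",
                MountPath: "/volume_mount",
                // SubPathExpr must resolve to a relative path; here it
                // resolves to "/absolute-path" and the mount is rejected.
                SubPathExpr: "$(POD_NAME)",
            }},
        }
        fmt.Println(c.VolumeMounts[0].SubPathExpr)
    }
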
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:03:48.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:03:48.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5328" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":290,"completed":224,"skipped":3695,"failed":0}
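Table rendering is negotiated through the Accept header: a client asks for application/json;as=Table;v=v1;g=meta.k8s.io, and a backend that cannot convert its objects to Table metadata must answer 406 Not Acceptable, which is the status this test asserts. A sketch of issuing such a request through the typed client's RESTClient (resource and namespace illustrative):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Ask the apiserver to render pods as a Table; a backend without
        // table conversion would return 406 instead of a body.
        body, err := cs.CoreV1().RESTClient().Get().
            Resource("pods").
            Namespace("default").
            SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
            DoRaw(context.TODO())
        fmt.Println(len(body), err)
    }
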
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:03:48.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:03:53.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-938" for this suite.

• [SLOW TEST:5.137 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":290,"completed":225,"skipped":3705,"failed":0}
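The guarantee under test is that watchers started from the same point in history observe events in one canonical order: the test lists once to record resourceVersions, opens watches from versions along the stream, and checks that all of them deliver the same sequence. A minimal single-watch sketch with client-go (resource and namespace illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List first to learn the current resourceVersion, then watch
        // from it; every watcher started at one version must see the
        // same sequence of subsequent events.
        list, err := cs.CoreV1().ConfigMaps("default").List(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
            metav1.ListOptions{ResourceVersion: list.ResourceVersion})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type)
        }
    }
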
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:03:53.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:03:53.574: INFO: Creating ReplicaSet my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9
May  1 01:03:53.628: INFO: Pod name my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9: Found 0 pods out of 1
May  1 01:03:58.642: INFO: Pod name my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9: Found 1 pod out of 1
May  1 01:03:58.642: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9" is running
May  1 01:03:58.672: INFO: Pod "my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9-q8q6l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 01:03:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 01:03:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 01:03:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 01:03:53 +0000 UTC Reason: Message:}])
May  1 01:03:58.673: INFO: Trying to dial the pod
May  1 01:04:03.684: INFO: Controller my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9: Got expected result from replica 1 [my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9-q8q6l]: "my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9-q8q6l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:04:03.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1578" for this suite.

• [SLOW TEST:10.164 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":290,"completed":226,"skipped":3716,"failed":0}
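"Trying to dial the pod" goes through the apiserver's pod proxy rather than the pod IP, which is what lets the probe run from outside the cluster network; the hostname-serving image replies with its own pod name, and that string is matched against the replica list. A sketch with ProxyGet (scheme and port are assumptions; the pod name is the one from the log):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET / on the pod via the apiserver proxy; the container answers
        // with its hostname, i.e. the pod name.
        body, err := cs.CoreV1().Pods("replicaset-1578").
            ProxyGet("http", "my-hostname-basic-cd153176-9e7a-4225-bdf7-0585f3aa30c9-q8q6l",
                "9376", "/", nil).
            DoRaw(context.TODO())
        fmt.Println(string(body), err)
    }
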
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:04:03.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-edff49bf-92fd-4d67-94fa-ba34492b0fc8
STEP: Creating a pod to test consume configMaps
May  1 01:04:03.811: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef" in namespace "projected-127" to be "Succeeded or Failed"
May  1 01:04:03.815: INFO: Pod "pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.426259ms
May  1 01:04:05.819: INFO: Pod "pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007512056s
May  1 01:04:07.823: INFO: Pod "pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012080643s
STEP: Saw pod success
May  1 01:04:07.823: INFO: Pod "pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef" satisfied condition "Succeeded or Failed"
May  1 01:04:07.827: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef container projected-configmap-volume-test: 
STEP: delete the pod
May  1 01:04:07.865: INFO: Waiting for pod pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef to disappear
May  1 01:04:07.881: INFO: Pod pod-projected-configmaps-8829a0ac-ba90-4980-bc82-3620bc1b24ef no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:04:07.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-127" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":227,"skipped":3730,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:04:07.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-6lqx
STEP: Creating a pod to test atomic-volume-subpath
May  1 01:04:08.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6lqx" in namespace "subpath-6729" to be "Succeeded or Failed"
May  1 01:04:08.082: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.744756ms
May  1 01:04:10.086: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022028522s
May  1 01:04:12.091: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 4.026409143s
May  1 01:04:14.095: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 6.030325747s
May  1 01:04:16.100: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 8.035254984s
May  1 01:04:18.104: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 10.039603362s
May  1 01:04:20.120: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 12.055860821s
May  1 01:04:22.126: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 14.062107136s
May  1 01:04:24.131: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 16.066199303s
May  1 01:04:26.150: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 18.085690209s
May  1 01:04:28.153: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 20.088578741s
May  1 01:04:30.157: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Running", Reason="", readiness=true. Elapsed: 22.092508383s
May  1 01:04:32.161: INFO: Pod "pod-subpath-test-secret-6lqx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.096689921s
STEP: Saw pod success
May  1 01:04:32.161: INFO: Pod "pod-subpath-test-secret-6lqx" satisfied condition "Succeeded or Failed"
May  1 01:04:32.164: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-6lqx container test-container-subpath-secret-6lqx: 
STEP: delete the pod
May  1 01:04:32.213: INFO: Waiting for pod pod-subpath-test-secret-6lqx to disappear
May  1 01:04:32.229: INFO: Pod pod-subpath-test-secret-6lqx no longer exists
STEP: Deleting pod pod-subpath-test-secret-6lqx
May  1 01:04:32.229: INFO: Deleting pod "pod-subpath-test-secret-6lqx" in namespace "subpath-6729"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:04:32.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6729" for this suite.

• [SLOW TEST:24.351 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":290,"completed":228,"skipped":3738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:04:32.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
May  1 01:04:32.315: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:04:32.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2765" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":290,"completed":229,"skipped":3791,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:04:32.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:04:32.532: INFO: >>> kubeConfig: /root/.kube/config
May  1 01:04:34.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May  1 01:04:37.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 create -f -'
May  1 01:04:40.958: INFO: stderr: ""
May  1 01:04:40.958: INFO: stdout: "e2e-test-crd-publish-openapi-1241-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  1 01:04:40.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 delete e2e-test-crd-publish-openapi-1241-crds test-foo'
May  1 01:04:41.080: INFO: stderr: ""
May  1 01:04:41.080: INFO: stdout: "e2e-test-crd-publish-openapi-1241-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May  1 01:04:41.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 apply -f -'
May  1 01:04:41.364: INFO: stderr: ""
May  1 01:04:41.364: INFO: stdout: "e2e-test-crd-publish-openapi-1241-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  1 01:04:41.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 delete e2e-test-crd-publish-openapi-1241-crds test-foo'
May  1 01:04:41.500: INFO: stderr: ""
May  1 01:04:41.500: INFO: stdout: "e2e-test-crd-publish-openapi-1241-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May  1 01:04:41.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 create -f -'
May  1 01:04:41.749: INFO: rc: 1
May  1 01:04:41.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 apply -f -'
May  1 01:04:41.989: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May  1 01:04:41.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 create -f -'
May  1 01:04:42.244: INFO: rc: 1
May  1 01:04:42.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7810 apply -f -'
May  1 01:04:42.528: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May  1 01:04:42.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1241-crds'
May  1 01:04:42.760: INFO: stderr: ""
May  1 01:04:42.760: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1241-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May  1 01:04:42.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1241-crds.metadata'
May  1 01:04:43.035: INFO: stderr: ""
May  1 01:04:43.036: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1241-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May  1 01:04:43.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1241-crds.spec'
May  1 01:04:43.298: INFO: stderr: ""
May  1 01:04:43.298: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1241-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May  1 01:04:43.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1241-crds.spec.bars'
May  1 01:04:43.590: INFO: stderr: ""
May  1 01:04:43.590: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1241-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works for CR with the same resource name as built-in object
May  1 01:04:43.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain ksvc.spec'
May  1 01:04:43.852: INFO: stderr: ""
May  1 01:04:43.852: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3000-crd\nVERSION:  crd-publish-openapi-test-service.example.com/v1alpha1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of CustomService\n\nFIELDS:\n   dummy\t\n     Dummy property.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May  1 01:04:43.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1241-crds.spec.bars2'
May  1 01:04:44.092: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:04:48.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7810" for this suite.

• [SLOW TEST:16.511 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":290,"completed":230,"skipped":3808,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:04:48.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:04:49.036: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b379cea7-f476-4d8a-ab2e-f8431e5b7c1b" in namespace "security-context-test-9308" to be "Succeeded or Failed"
May  1 01:04:49.039: INFO: Pod "busybox-readonly-false-b379cea7-f476-4d8a-ab2e-f8431e5b7c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.608645ms
May  1 01:04:51.043: INFO: Pod "busybox-readonly-false-b379cea7-f476-4d8a-ab2e-f8431e5b7c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006981425s
May  1 01:04:53.047: INFO: Pod "busybox-readonly-false-b379cea7-f476-4d8a-ab2e-f8431e5b7c1b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011171263s
May  1 01:04:55.051: INFO: Pod "busybox-readonly-false-b379cea7-f476-4d8a-ab2e-f8431e5b7c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015228959s
May  1 01:04:55.051: INFO: Pod "busybox-readonly-false-b379cea7-f476-4d8a-ab2e-f8431e5b7c1b" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:04:55.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9308" for this suite.

• [SLOW TEST:6.150 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":290,"completed":231,"skipped":3815,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:04:55.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:04:55.143: INFO: Creating deployment "webserver-deployment"
May  1 01:04:55.158: INFO: Waiting for observed generation 1
May  1 01:04:57.184: INFO: Waiting for all required pods to come up
May  1 01:04:57.189: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May  1 01:05:09.198: INFO: Waiting for deployment "webserver-deployment" to complete
May  1 01:05:09.203: INFO: Updating deployment "webserver-deployment" with a non-existent image
May  1 01:05:09.209: INFO: Updating deployment webserver-deployment
May  1 01:05:09.209: INFO: Waiting for observed generation 2
May  1 01:05:11.277: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May  1 01:05:11.279: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May  1 01:05:11.280: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  1 01:05:11.286: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May  1 01:05:11.286: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May  1 01:05:11.288: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  1 01:05:11.291: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May  1 01:05:11.291: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May  1 01:05:11.297: INFO: Updating deployment webserver-deployment
May  1 01:05:11.297: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May  1 01:05:11.766: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May  1 01:05:11.875: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
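[Editor's note] The 20/13 split verified above is proportional scaling at work. Before the scale-up, the rollout is stuck on webserver:404 with the old ReplicaSet at 8 (10 minus maxUnavailable 2) and the new one at 5 (surge capacity 13 minus 8). Scaling the deployment 10 -> 30 distributes the extra capacity in proportion to each ReplicaSet's current size. A simplified sketch of that arithmetic, reproducing the numbers in this log (the real controller handles rounding and leftovers via annotations):

```go
package main

import "fmt"

func main() {
	// Values taken from the log above: maxSurge=3, maxUnavailable=2,
	// old RS at 8 and new RS at 5 while the rollout is stuck.
	desired, maxSurge := int32(30), int32(3)
	oldRS, newRS := int32(8), int32(5)

	allowed := desired + maxSurge // 33: total pods the deployment may own
	current := oldRS + newRS      // 13
	toAdd := allowed - current    // 20 replicas to distribute

	// Each ReplicaSet receives a share proportional to its current size;
	// integer truncation leaves the remainder for the newer ReplicaSet.
	oldShare := toAdd * oldRS / current // 12
	newShare := toAdd - oldShare        // 8

	fmt.Println(oldRS+oldShare, newRS+newShare) // 20 13, matching the checks above
}
```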
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May  1 01:05:12.169: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-4448 /apis/apps/v1/namespaces/deployment-4448/deployments/webserver-deployment 1616f62e-3ef2-46f8-a66f-3c97ea4b0633 466694 3 2020-05-01 01:04:55 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036f8658  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-01 01:05:09 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-01 01:05:11 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

May  1 01:05:12.257: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-4448 /apis/apps/v1/namespaces/deployment-4448/replicasets/webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 466735 3 2020-05-01 01:05:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1616f62e-3ef2-46f8-a66f-3c97ea4b0633 0xc0036f8af7 0xc0036f8af8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1616f62e-3ef2-46f8-a66f-3c97ea4b0633\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036f8b88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 01:05:12.257: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May  1 01:05:12.257: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-4448 /apis/apps/v1/namespaces/deployment-4448/replicasets/webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 466729 3 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1616f62e-3ef2-46f8-a66f-3c97ea4b0633 0xc0036f8be7 0xc0036f8be8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1616f62e-3ef2-46f8-a66f-3c97ea4b0633\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036f8ca8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May  1 01:05:12.324: INFO: Pod "webserver-deployment-6676bcd6d4-5fr6p" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5fr6p webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-5fr6p 87f3dda8-74e5-459b-b153-0e72065ea6b7 466666 0 2020-05-01 01:05:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5a207 0xc004a5a208}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil
,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-01 01:05:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.324: INFO: Pod "webserver-deployment-6676bcd6d4-8mh66" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8mh66 webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-8mh66 ed4088b2-9a57-4c3c-8d9d-ae34f1b4da33 466640 0 2020-05-01 01:05:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5a3b0 0xc004a5a3b1}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil
,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-01 01:05:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.324: INFO: Pod "webserver-deployment-6676bcd6d4-d8ghh" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d8ghh webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-d8ghh 37701c5a-7443-418f-a50c-9a706ea38298 466719 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5a5c0 0xc004a5a5c1}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,P
riority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.324: INFO: Pod "webserver-deployment-6676bcd6d4-dmh2l" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dmh2l webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-dmh2l 0866436d-b343-4acf-bb5a-da7100b61676 466726 0 2020-05-01 01:05:12 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5a760 0xc004a5a761}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.325: INFO: Pod "webserver-deployment-6676bcd6d4-fr5dq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fr5dq webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-fr5dq 50292a57-3bc1-4445-9b10-1daa5f4bdd7e 466721 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5a910 0xc004a5a911}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.325: INFO: Pod "webserver-deployment-6676bcd6d4-ghw4b" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ghw4b webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-ghw4b e787f906-cd3d-46e3-9743-33aad4bd4024 466692 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5aa60 0xc004a5aa61}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.325: INFO: Pod "webserver-deployment-6676bcd6d4-jxqkt" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jxqkt webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-jxqkt 54b6f50f-5f75-4b0c-b4fd-32ace3e7482d 466697 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5ac20 0xc004a5ac21}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.325: INFO: Pod "webserver-deployment-6676bcd6d4-lvcr8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lvcr8 webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-lvcr8 1eb3de13-a3f0-4056-9186-94853e46fe9e 466723 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5ad90 0xc004a5ad91}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.325: INFO: Pod "webserver-deployment-6676bcd6d4-n76cg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n76cg webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-n76cg ec01dfd5-5e55-4624-8464-73e65bd25671 466646 0 2020-05-01 01:05:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5afb0 0xc004a5afb1}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,
RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-01 01:05:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.325: INFO: Pod "webserver-deployment-6676bcd6d4-rfp9b" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rfp9b webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-rfp9b 21570e53-eceb-409e-a4ce-ea95ff3172f7 466660 0 2020-05-01 01:05:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5b1d0 0xc004a5b1d1}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,
RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-01 01:05:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-6676bcd6d4-rwfd6" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rwfd6 webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-rwfd6 7f294fe6-1510-49c3-9290-1485653d4e50 466713 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5b420 0xc004a5b421}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-6676bcd6d4-s9hx5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s9hx5 webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-s9hx5 0dda0846-a2fd-4e71-a44f-282030852924 466704 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5b600 0xc004a5b601}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,
Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-6676bcd6d4-zsz4d" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zsz4d webserver-deployment-6676bcd6d4- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-6676bcd6d4-zsz4d 4a5e690c-ffd8-4bfc-9f40-88872b59b79a 466664 0 2020-05-01 01:05:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ffd125c8-5bc4-4d02-b756-d979fd6ebd36 0xc004a5b780 0xc004a5b781}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd125c8-5bc4-4d02-b756-d979fd6ebd36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,
RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-01 01:05:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-84855cf797-74xqz" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-74xqz webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-74xqz e4f234ef-0e51-4756-9f9b-4630fd91edf7 466564 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004a5b9b0 0xc004a5b9b1}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.179\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,
RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.179,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef4724d64e56ab4c4e9f6ca52a738aa464dfa54846ca2d0a4ad452db2402e97c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-84855cf797-8fk8g" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-8fk8g webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-8fk8g 3c963756-0686-45e4-8ba8-75f0c2024dc2 466718 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004a5bbd7 0xc004a5bbd8}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},
PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-84855cf797-9jfct" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-9jfct webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-9jfct 87fefec6-0ccf-40cf-aec1-4089e0c0bcee 466590 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004a5bd20 0xc004a5bd21}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,
RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.181,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://de46cf7528125ee487582a11d6c0b19ad62d61646a31abe8351b7f113fc5d248,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-84855cf797-b7w2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-b7w2r webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-b7w2r 419df1ee-9ed0-4079-9a20-840513851d87 466739 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004a5bec7 0xc004a5bec8}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplemen
talGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-01 01:05:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.326: INFO: Pod "webserver-deployment-84855cf797-cjzcw" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cjzcw webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-cjzcw d205ddb0-50b2-4c66-a37d-4eb117336b00 466595 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc002360087 0xc002360088}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.180\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.180,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://49acdf41a1bccc50193895fb1d9baa07affb694feb05a07c29f408be0a356b1a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-cvq7c" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cvq7c webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-cvq7c 27c9794f-89dd-463f-981a-ef30c46c177f 466700 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc0023607a7 0xc0023607a8}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlia
s{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-dq99d" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dq99d webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-dq99d c723b2d3-c10c-419a-b8fd-799d70052600 466711 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc002360a80 0xc002360a81}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAli
as{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-h4tcz" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-h4tcz webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-h4tcz b7f4ce23-6877-4bdf-844b-db9868129e85 466724 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc002360d80 0xc002360d81}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlia
s{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-hrsjx" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hrsjx webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-hrsjx 96d30a49-46b2-45d6-8c8d-cc4f861fbfdc 466551 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc002360f10 0xc002360f11}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.135,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fa5d436950a5812fc5572e5a304dd106b2a27ce5b9fd485d6a02b09587d3dc30,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-jh7cd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jh7cd webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-jh7cd 56238f6c-380d-4363-861a-6e07e9a11117 466733 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc0023616a7 0xc0023616a8}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplement
alGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-01 01:05:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-k2lzg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-k2lzg webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-k2lzg 50a8cfa2-68c8-4aca-9534-16c580f6bd8e 466717 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984187 0xc004984188}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAli
as{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.327: INFO: Pod "webserver-deployment-84855cf797-khv5h" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-khv5h webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-khv5h 9c29c442-0c25-4e36-a213-fb700e25f573 466608 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984310 0xc004984311}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.182,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fb0b99351c02c88eff19e716813acc8ce7a1ebe34374b00bfdcc6ac34a52ac0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-m5dk5" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-m5dk5 webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-m5dk5 55201b69-f9f4-4a69-bf25-9f66053fa9dc 466710 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984577 0xc004984578}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlia
s{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-nhzc4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nhzc4 webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-nhzc4 208fb017-670f-4c6e-924f-597a6b14af33 466722 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984720 0xc004984721}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-pzt92" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-pzt92 webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-pzt92 70b9020e-7670-4838-9f89-e6a7621f8b55 466561 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc0049848f0 0xc0049848f1}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.136,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://633dc164b6cc3690d68c6eace2be58b73d8b8281a357b11056c737633bf023a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-qv9qg" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qv9qg webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-qv9qg f43b24e4-dc7a-4a0b-96c2-6486334cb273 466538 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984b37 0xc004984b38}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.134,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6e10d1877359bbea3353389866dcc8132942742f4ec9120e5c092df9a8917ba5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-vkb2c" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vkb2c webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-vkb2c 08d29f30-357e-42eb-83ea-9f3389ae7fe0 466605 0 2020-05-01 01:04:55 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984d67 0xc004984d68}] []  [{kube-controller-manager Update v1 2020-05-01 01:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.183,StartTime:2020-05-01 01:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:05:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7c825ed7e1d9c1f66c0ea4a1a7fdfe1fc4fa630d8ef134bc34df896f5a7164c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-wc99q" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wc99q webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-wc99q 79c0d26e-888f-4f00-b5d7-6d45261b6bad 466709 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004984fa7 0xc004984fa8}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-zljdr" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zljdr webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-zljdr c05bd012-314b-4270-8ad0-9fdffbf8dd88 466720 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc004985140 0xc004985141}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:05:12.328: INFO: Pod "webserver-deployment-84855cf797-znlv8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-znlv8 webserver-deployment-84855cf797- deployment-4448 /api/v1/namespaces/deployment-4448/pods/webserver-deployment-84855cf797-znlv8 d438442f-df22-4796-b0fa-ae56664d2e0f 466728 0 2020-05-01 01:05:11 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 3943a149-195b-4677-a80c-643301cee4ca 0xc0049852b0 0xc0049852b1}] []  [{kube-controller-manager Update v1 2020-05-01 01:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3943a149-195b-4677-a80c-643301cee4ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:05:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvqfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvqfs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvqfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-01 01:05:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
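The &Pod{...} blocks above are the test's dumps of the deployment's pods at the moment of the final assertion: the pods logged as "available" are Running with Ready=True, while the "not available" ones are still Pending or have unready containers. A rough way to get the equivalent view from a live cluster, assuming the namespace and pod-template-hash from the dumps still exist (these commands are not part of the test itself):
kubectl get pods -n deployment-4448 -l pod-template-hash=84855cf797 -o wide   # phase and readiness per pod
kubectl get pod <pod-name> -n deployment-4448 -o yaml                         # the full object, as dumped above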
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:05:12.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4448" for this suite.

• [SLOW TEST:17.516 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":290,"completed":232,"skipped":3834,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:05:12.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-2195
STEP: creating service affinity-clusterip in namespace services-2195
STEP: creating replication controller affinity-clusterip in namespace services-2195
I0501 01:05:16.649760       7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2195, replica count: 3
I0501 01:05:19.700197       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:05:22.700407       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:05:25.700644       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:05:28.700899       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:05:31.701361       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:05:34.701553       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:05:37.701784       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  1 01:05:37.852: INFO: Creating new exec pod
May  1 01:05:43.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2195 execpod-affinitycs9lg -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
May  1 01:05:43.338: INFO: stderr: "I0501 01:05:43.223981    3388 log.go:172] (0xc000ab4000) (0xc000546140) Create stream\nI0501 01:05:43.224054    3388 log.go:172] (0xc000ab4000) (0xc000546140) Stream added, broadcasting: 1\nI0501 01:05:43.227225    3388 log.go:172] (0xc000ab4000) Reply frame received for 1\nI0501 01:05:43.227270    3388 log.go:172] (0xc000ab4000) (0xc000345ea0) Create stream\nI0501 01:05:43.227285    3388 log.go:172] (0xc000ab4000) (0xc000345ea0) Stream added, broadcasting: 3\nI0501 01:05:43.228021    3388 log.go:172] (0xc000ab4000) Reply frame received for 3\nI0501 01:05:43.228053    3388 log.go:172] (0xc000ab4000) (0xc000434b40) Create stream\nI0501 01:05:43.228062    3388 log.go:172] (0xc000ab4000) (0xc000434b40) Stream added, broadcasting: 5\nI0501 01:05:43.229024    3388 log.go:172] (0xc000ab4000) Reply frame received for 5\nI0501 01:05:43.329520    3388 log.go:172] (0xc000ab4000) Data frame received for 5\nI0501 01:05:43.329554    3388 log.go:172] (0xc000434b40) (5) Data frame handling\nI0501 01:05:43.329569    3388 log.go:172] (0xc000434b40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0501 01:05:43.330621    3388 log.go:172] (0xc000ab4000) Data frame received for 5\nI0501 01:05:43.330652    3388 log.go:172] (0xc000434b40) (5) Data frame handling\nI0501 01:05:43.330683    3388 log.go:172] (0xc000434b40) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0501 01:05:43.331257    3388 log.go:172] (0xc000ab4000) Data frame received for 3\nI0501 01:05:43.331307    3388 log.go:172] (0xc000ab4000) Data frame received for 5\nI0501 01:05:43.331337    3388 log.go:172] (0xc000434b40) (5) Data frame handling\nI0501 01:05:43.331359    3388 log.go:172] (0xc000345ea0) (3) Data frame handling\nI0501 01:05:43.333608    3388 log.go:172] (0xc000ab4000) Data frame received for 1\nI0501 01:05:43.333633    3388 log.go:172] (0xc000546140) (1) Data frame handling\nI0501 01:05:43.333648    3388 log.go:172] (0xc000546140) (1) Data frame sent\nI0501 01:05:43.333685    3388 log.go:172] (0xc000ab4000) (0xc000546140) Stream removed, broadcasting: 1\nI0501 01:05:43.333845    3388 log.go:172] (0xc000ab4000) Go away received\nI0501 01:05:43.334123    3388 log.go:172] (0xc000ab4000) (0xc000546140) Stream removed, broadcasting: 1\nI0501 01:05:43.334144    3388 log.go:172] (0xc000ab4000) (0xc000345ea0) Stream removed, broadcasting: 3\nI0501 01:05:43.334175    3388 log.go:172] (0xc000ab4000) (0xc000434b40) Stream removed, broadcasting: 5\n"
May  1 01:05:43.338: INFO: stdout: ""
May  1 01:05:43.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2195 execpod-affinitycs9lg -- /bin/sh -x -c nc -zv -t -w 2 10.108.95.232 80'
May  1 01:05:43.552: INFO: stderr: "I0501 01:05:43.475396    3408 log.go:172] (0xc000b4f6b0) (0xc00084cfa0) Create stream\nI0501 01:05:43.475449    3408 log.go:172] (0xc000b4f6b0) (0xc00084cfa0) Stream added, broadcasting: 1\nI0501 01:05:43.479271    3408 log.go:172] (0xc000b4f6b0) Reply frame received for 1\nI0501 01:05:43.479308    3408 log.go:172] (0xc000b4f6b0) (0xc0007d15e0) Create stream\nI0501 01:05:43.479317    3408 log.go:172] (0xc000b4f6b0) (0xc0007d15e0) Stream added, broadcasting: 3\nI0501 01:05:43.480054    3408 log.go:172] (0xc000b4f6b0) Reply frame received for 3\nI0501 01:05:43.480086    3408 log.go:172] (0xc000b4f6b0) (0xc0006fab40) Create stream\nI0501 01:05:43.480097    3408 log.go:172] (0xc000b4f6b0) (0xc0006fab40) Stream added, broadcasting: 5\nI0501 01:05:43.480830    3408 log.go:172] (0xc000b4f6b0) Reply frame received for 5\nI0501 01:05:43.543745    3408 log.go:172] (0xc000b4f6b0) Data frame received for 5\nI0501 01:05:43.543805    3408 log.go:172] (0xc0006fab40) (5) Data frame handling\nI0501 01:05:43.543831    3408 log.go:172] (0xc0006fab40) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.95.232 80\nConnection to 10.108.95.232 80 port [tcp/http] succeeded!\nI0501 01:05:43.543859    3408 log.go:172] (0xc000b4f6b0) Data frame received for 3\nI0501 01:05:43.543877    3408 log.go:172] (0xc0007d15e0) (3) Data frame handling\nI0501 01:05:43.543918    3408 log.go:172] (0xc000b4f6b0) Data frame received for 5\nI0501 01:05:43.543951    3408 log.go:172] (0xc0006fab40) (5) Data frame handling\nI0501 01:05:43.545693    3408 log.go:172] (0xc000b4f6b0) Data frame received for 1\nI0501 01:05:43.545719    3408 log.go:172] (0xc00084cfa0) (1) Data frame handling\nI0501 01:05:43.545733    3408 log.go:172] (0xc00084cfa0) (1) Data frame sent\nI0501 01:05:43.545749    3408 log.go:172] (0xc000b4f6b0) (0xc00084cfa0) Stream removed, broadcasting: 1\nI0501 01:05:43.545838    3408 log.go:172] (0xc000b4f6b0) Go away received\nI0501 01:05:43.546110    3408 log.go:172] (0xc000b4f6b0) (0xc00084cfa0) Stream removed, broadcasting: 1\nI0501 01:05:43.546149    3408 log.go:172] (0xc000b4f6b0) (0xc0007d15e0) Stream removed, broadcasting: 3\nI0501 01:05:43.546177    3408 log.go:172] (0xc000b4f6b0) (0xc0006fab40) Stream removed, broadcasting: 5\n"
May  1 01:05:43.552: INFO: stdout: ""
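The two probes above exec `nc -zv` inside the test's exec pod to confirm that the service answers on port 80 both via its DNS name (affinity-clusterip) and via its ClusterIP (10.108.95.232) before any affinity is checked; roughly the same check by hand, assuming the exec pod still exists, would be:
kubectl exec -n services-2195 execpod-affinitycs9lg -- sh -c 'nc -zv -t -w 2 affinity-clusterip 80'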
May  1 01:05:43.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2195 execpod-affinitycs9lg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.95.232:80/ ; done'
May  1 01:05:43.864: INFO: stderr: "I0501 01:05:43.687373    3428 log.go:172] (0xc0000e8370) (0xc0003361e0) Create stream\nI0501 01:05:43.687448    3428 log.go:172] (0xc0000e8370) (0xc0003361e0) Stream added, broadcasting: 1\nI0501 01:05:43.690349    3428 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0501 01:05:43.690407    3428 log.go:172] (0xc0000e8370) (0xc000337400) Create stream\nI0501 01:05:43.690423    3428 log.go:172] (0xc0000e8370) (0xc000337400) Stream added, broadcasting: 3\nI0501 01:05:43.691452    3428 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0501 01:05:43.691485    3428 log.go:172] (0xc0000e8370) (0xc0003ecbe0) Create stream\nI0501 01:05:43.691497    3428 log.go:172] (0xc0000e8370) (0xc0003ecbe0) Stream added, broadcasting: 5\nI0501 01:05:43.692337    3428 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0501 01:05:43.761008    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.761070    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.761100    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.761283    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.761320    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.761333    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.767820    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.767845    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.767873    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.768086    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.768101    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.768108    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.768163    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.768186    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.768192    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.774385    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.774412    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.774439    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.774823    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.774848    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.774864    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.774894    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.774915    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.774941    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.783445    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.783458    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.783463    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.784221    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.784254    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.784268    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.784291    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.784305    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.784320    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.788944    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.788968    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.788988    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.789652    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.789673    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.789680    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.789701    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.789733    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.789752    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.794400    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.794415    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.794428    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.794974    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.795026    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.795055    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.795086    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.795103    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.795118    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.801861    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.801881    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.801894    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.802278    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.802306    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.802333    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.802345    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.802356    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.802364    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.806267    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.806296    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.806321    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.806678    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.806695    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.806710    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\nI0501 01:05:43.806784    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.806811    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.806833    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.806933    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.806959    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.806980    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.813698    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.813726    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.813759    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.814150    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.814183    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.814225    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.814253    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.814279    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.814300    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.819338    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.819362    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.819387    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.820321    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.820358    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.820408    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.820439    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.820484    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.820533    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.825732    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.825758    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.825774    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.826432    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.826451    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.826463    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.826479    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.826499    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.826515    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.832523    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.832544    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.832566    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.832981    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.833015    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.833030    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.833051    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.833065    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.833079    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.836935    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.836959    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.836980    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.837648    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.837689    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.837710    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.837735    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.837782    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.837814    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.841665    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.841678    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.841685    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.842034    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.842078    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.842098    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.842133    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.842146    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.842179    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.847831    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.847862    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.847888    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.848709    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.848721    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.848728    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.848877    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.848897    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.848918    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.851822    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.851837    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.851849    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.852167    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.852187    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.852206    3428 log.go:172] (0xc0003ecbe0) (5) Data frame sent\nI0501 01:05:43.852221    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.852232    3428 log.go:172] (0xc000337400) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.95.232:80/\nI0501 01:05:43.852245    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.855441    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.855455    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.855467    3428 log.go:172] (0xc000337400) (3) Data frame sent\nI0501 01:05:43.856386    3428 log.go:172] (0xc0000e8370) Data frame received for 3\nI0501 01:05:43.856405    3428 log.go:172] (0xc000337400) (3) Data frame handling\nI0501 01:05:43.856692    3428 log.go:172] (0xc0000e8370) Data frame received for 5\nI0501 01:05:43.856710    3428 log.go:172] (0xc0003ecbe0) (5) Data frame handling\nI0501 01:05:43.858371    3428 log.go:172] (0xc0000e8370) Data frame received for 1\nI0501 01:05:43.858393    3428 log.go:172] (0xc0003361e0) (1) Data frame handling\nI0501 01:05:43.858423    3428 log.go:172] (0xc0003361e0) (1) Data frame sent\nI0501 01:05:43.858479    3428 log.go:172] (0xc0000e8370) (0xc0003361e0) Stream removed, broadcasting: 1\nI0501 01:05:43.858587    3428 log.go:172] (0xc0000e8370) Go away received\nI0501 01:05:43.858812    3428 log.go:172] (0xc0000e8370) (0xc0003361e0) Stream removed, broadcasting: 1\nI0501 01:05:43.858833    3428 log.go:172] (0xc0000e8370) (0xc000337400) Stream removed, broadcasting: 3\nI0501 01:05:43.858851    3428 log.go:172] (0xc0000e8370) (0xc0003ecbe0) Stream removed, broadcasting: 5\n"
May  1 01:05:43.865: INFO: stdout: "\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv\naffinity-clusterip-6t7vv"
May  1 01:05:43.865: INFO: Received response from host: 
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Received response from host: affinity-clusterip-6t7vv
May  1 01:05:43.865: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-2195, will wait for the garbage collector to delete the pods
May  1 01:05:43.975: INFO: Deleting ReplicationController affinity-clusterip took: 39.396608ms
May  1 01:05:45.076: INFO: Terminating ReplicationController affinity-clusterip pods took: 1.100264224s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:05:54.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2195" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:42.351 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":290,"completed":233,"skipped":3837,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:05:54.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:05:59.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1586" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":290,"completed":234,"skipped":3858,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:05:59.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:06:16.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2116" for this suite.

• [SLOW TEST:17.160 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":290,"completed":235,"skipped":3898,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:06:16.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May  1 01:06:16.281: INFO: >>> kubeConfig: /root/.kube/config
May  1 01:06:19.286: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:06:29.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-206" for this suite.

• [SLOW TEST:13.029 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":290,"completed":236,"skipped":3915,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:06:29.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-4c781735-f7ea-4960-a779-f23ab558d8a9
STEP: Creating a pod to test consume configMaps
May  1 01:06:29.473: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c" in namespace "projected-5790" to be "Succeeded or Failed"
May  1 01:06:29.507: INFO: Pod "pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.457167ms
May  1 01:06:31.590: INFO: Pod "pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116531946s
May  1 01:06:33.594: INFO: Pod "pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121127424s
STEP: Saw pod success
May  1 01:06:33.595: INFO: Pod "pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c" satisfied condition "Succeeded or Failed"
May  1 01:06:33.598: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c container projected-configmap-volume-test: 
STEP: delete the pod
May  1 01:06:33.666: INFO: Waiting for pod pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c to disappear
May  1 01:06:33.684: INFO: Pod pod-projected-configmaps-7b4d53f1-d28a-4fbc-9794-9c0a2d775c2c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:06:33.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5790" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":290,"completed":237,"skipped":3921,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:06:33.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 01:06:34.563: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 01:06:36.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891994, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891994, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891994, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723891994, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 01:06:39.611: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:06:39.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:06:40.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1560" for this suite.
STEP: Destroying namespace "webhook-1560-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.193 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":290,"completed":238,"skipped":3947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:06:40.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 01:06:42.368: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 01:06:44.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892002, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892002, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892002, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892002, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 01:06:47.420: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May  1 01:06:47.442: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:06:47.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2197" for this suite.
STEP: Destroying namespace "webhook-2197-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.635 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":290,"completed":239,"skipped":3973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:06:47.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May  1 01:06:47.718: INFO: Waiting up to 5m0s for pod "downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e" in namespace "downward-api-8942" to be "Succeeded or Failed"
May  1 01:06:47.805: INFO: Pod "downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e": Phase="Pending", Reason="", readiness=false. Elapsed: 86.150146ms
May  1 01:06:49.808: INFO: Pod "downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090091478s
May  1 01:06:51.813: INFO: Pod "downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094173272s
STEP: Saw pod success
May  1 01:06:51.813: INFO: Pod "downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e" satisfied condition "Succeeded or Failed"
May  1 01:06:51.815: INFO: Trying to get logs from node latest-worker2 pod downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e container dapi-container: 
STEP: delete the pod
May  1 01:06:51.873: INFO: Waiting for pod downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e to disappear
May  1 01:06:51.882: INFO: Pod downward-api-9ca1af13-61cc-4c4c-9c03-965cbddf131e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:06:51.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8942" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":290,"completed":240,"skipped":3996,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:06:51.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-tpf2
STEP: Creating a pod to test atomic-volume-subpath
May  1 01:06:52.010: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tpf2" in namespace "subpath-8057" to be "Succeeded or Failed"
May  1 01:06:52.020: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275271ms
May  1 01:06:54.025: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014725326s
May  1 01:06:56.028: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.018370444s
May  1 01:06:58.033: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 6.023047799s
May  1 01:07:00.037: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 8.027390667s
May  1 01:07:02.041: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 10.031605355s
May  1 01:07:04.046: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 12.036183435s
May  1 01:07:06.050: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 14.040336216s
May  1 01:07:08.055: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 16.044938071s
May  1 01:07:10.059: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 18.04959115s
May  1 01:07:12.063: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 20.053426298s
May  1 01:07:14.068: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Running", Reason="", readiness=true. Elapsed: 22.057852604s
May  1 01:07:16.072: INFO: Pod "pod-subpath-test-configmap-tpf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062260301s
STEP: Saw pod success
May  1 01:07:16.072: INFO: Pod "pod-subpath-test-configmap-tpf2" satisfied condition "Succeeded or Failed"
May  1 01:07:16.075: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-tpf2 container test-container-subpath-configmap-tpf2: 
STEP: delete the pod
May  1 01:07:16.107: INFO: Waiting for pod pod-subpath-test-configmap-tpf2 to disappear
May  1 01:07:16.135: INFO: Pod pod-subpath-test-configmap-tpf2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tpf2
May  1 01:07:16.135: INFO: Deleting pod "pod-subpath-test-configmap-tpf2" in namespace "subpath-8057"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:16.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8057" for this suite.

• [SLOW TEST:24.259 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":290,"completed":241,"skipped":4040,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:16.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:07:16.224: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May  1 01:07:21.236: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  1 01:07:21.236: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May  1 01:07:21.265: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-5641 /apis/apps/v1/namespaces/deployment-5641/deployments/test-cleanup-deployment ea4864cc-1dd0-4bb0-8ee0-44deeb827a46 467696 1 2020-05-01 01:07:21 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-05-01 01:07:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031e55c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

May  1 01:07:21.328: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694  deployment-5641 /apis/apps/v1/namespaces/deployment-5641/replicasets/test-cleanup-deployment-6688745694 755702af-25e6-458e-85fe-999a1a9aa013 467698 1 2020-05-01 01:07:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ea4864cc-1dd0-4bb0-8ee0-44deeb827a46 0xc0031e5a97 0xc0031e5a98}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:07:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea4864cc-1dd0-4bb0-8ee0-44deeb827a46\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:6688745694] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031e5b28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 01:07:21.328: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May  1 01:07:21.329: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-5641 /apis/apps/v1/namespaces/deployment-5641/replicasets/test-cleanup-controller 1684dd9b-a155-45b9-add2-0f93431f36c6 467697 1 2020-05-01 01:07:16 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ea4864cc-1dd0-4bb0-8ee0-44deeb827a46 0xc0031e5977 0xc0031e5978}] []  [{e2e.test Update apps/v1 2020-05-01 01:07:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 01:07:21 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"ea4864cc-1dd0-4bb0-8ee0-44deeb827a46\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0031e5a28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  1 01:07:21.495: INFO: Pod "test-cleanup-controller-gn2vj" is available:
&Pod{ObjectMeta:{test-cleanup-controller-gn2vj test-cleanup-controller- deployment-5641 /api/v1/namespaces/deployment-5641/pods/test-cleanup-controller-gn2vj 83024256-e82b-4d17-90d3-b79f19721f66 467684 0 2020-05-01 01:07:16 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 1684dd9b-a155-45b9-add2-0f93431f36c6 0xc00312f107 0xc00312f108}] []  [{kube-controller-manager Update v1 2020-05-01 01:07:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1684dd9b-a155-45b9-add2-0f93431f36c6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:07:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cprtj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cprtj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cprtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},Autom
ountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.158,StartTime:2020-05-01 01:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:07:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://10542deeda4d508a2c4a2c00c49e373c98006f9d4f1d0e30f7318b8a7469239f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 01:07:21.496: INFO: Pod "test-cleanup-deployment-6688745694-tpzk6" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-6688745694-tpzk6 test-cleanup-deployment-6688745694- deployment-5641 /api/v1/namespaces/deployment-5641/pods/test-cleanup-deployment-6688745694-tpzk6 36f6e033-a67d-48bf-9e53-9e2a3dee8632 467703 0 2020-05-01 01:07:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 755702af-25e6-458e-85fe-999a1a9aa013 0xc00312f2c7 0xc00312f2c8}] []  [{kube-controller-manager Update v1 2020-05-01 01:07:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"755702af-25e6-458e-85fe-999a1a9aa013\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cprtj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cprtj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cprtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tolerat
ionSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:07:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:21.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5641" for this suite.

• [SLOW TEST:5.388 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":290,"completed":242,"skipped":4058,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:21.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-367f3379-f2bb-4a02-a2b5-4f7c8656b0c8
STEP: Creating a pod to test consume secrets
May  1 01:07:21.656: INFO: Waiting up to 5m0s for pod "pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514" in namespace "secrets-552" to be "Succeeded or Failed"
May  1 01:07:21.687: INFO: Pod "pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514": Phase="Pending", Reason="", readiness=false. Elapsed: 31.355086ms
May  1 01:07:23.841: INFO: Pod "pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185556803s
May  1 01:07:25.998: INFO: Pod "pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514": Phase="Running", Reason="", readiness=true. Elapsed: 4.342931006s
May  1 01:07:28.003: INFO: Pod "pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.347248447s
STEP: Saw pod success
May  1 01:07:28.003: INFO: Pod "pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514" satisfied condition "Succeeded or Failed"
May  1 01:07:28.006: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514 container secret-volume-test: 
STEP: delete the pod
May  1 01:07:28.045: INFO: Waiting for pod pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514 to disappear
May  1 01:07:28.074: INFO: Pod pod-secrets-5d5a2b96-0a3b-4b5d-8ffc-42bc31992514 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-552" for this suite.

• [SLOW TEST:6.547 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":243,"skipped":4093,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:28.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May  1 01:07:28.162: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  1 01:07:28.174: INFO: Waiting for terminating namespaces to be deleted...
May  1 01:07:28.200: INFO: 
Logging pods the apiserver thinks are on node latest-worker before test
May  1 01:07:28.205: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
May  1 01:07:28.205: INFO: 	Container kindnet-cni ready: true, restart count 0
May  1 01:07:28.205: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
May  1 01:07:28.205: INFO: 	Container kube-proxy ready: true, restart count 0
May  1 01:07:28.205: INFO: 
Logging pods the apiserver thinks are on node latest-worker2 before test
May  1 01:07:28.210: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
May  1 01:07:28.210: INFO: 	Container kindnet-cni ready: true, restart count 0
May  1 01:07:28.210: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
May  1 01:07:28.210: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.160ac21cf6324c4e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:29.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6596" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":290,"completed":244,"skipped":4093,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:29.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May  1 01:07:29.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59" in namespace "projected-7389" to be "Succeeded or Failed"
May  1 01:07:29.367: INFO: Pod "downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59": Phase="Pending", Reason="", readiness=false. Elapsed: 25.929787ms
May  1 01:07:31.371: INFO: Pod "downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030176442s
May  1 01:07:33.375: INFO: Pod "downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033530753s
STEP: Saw pod success
May  1 01:07:33.375: INFO: Pod "downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59" satisfied condition "Succeeded or Failed"
May  1 01:07:33.377: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59 container client-container: 
STEP: delete the pod
May  1 01:07:33.411: INFO: Waiting for pod downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59 to disappear
May  1 01:07:33.433: INFO: Pod downwardapi-volume-54524e1f-9429-45ed-a785-dd786f7d1b59 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:33.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7389" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":245,"skipped":4155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:33.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:07:33.527: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6ebd1b39-e05a-46e9-9055-74f0160817d8" in namespace "security-context-test-9296" to be "Succeeded or Failed"
May  1 01:07:33.536: INFO: Pod "alpine-nnp-false-6ebd1b39-e05a-46e9-9055-74f0160817d8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.184207ms
May  1 01:07:35.541: INFO: Pod "alpine-nnp-false-6ebd1b39-e05a-46e9-9055-74f0160817d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014261565s
May  1 01:07:37.545: INFO: Pod "alpine-nnp-false-6ebd1b39-e05a-46e9-9055-74f0160817d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018026195s
May  1 01:07:37.545: INFO: Pod "alpine-nnp-false-6ebd1b39-e05a-46e9-9055-74f0160817d8" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:37.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9296" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":246,"skipped":4180,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:37.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:07:37.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  1 01:07:40.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3123 create -f -'
May  1 01:07:43.839: INFO: stderr: ""
May  1 01:07:43.839: INFO: stdout: "e2e-test-crd-publish-openapi-9525-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  1 01:07:43.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3123 delete e2e-test-crd-publish-openapi-9525-crds test-cr'
May  1 01:07:43.954: INFO: stderr: ""
May  1 01:07:43.954: INFO: stdout: "e2e-test-crd-publish-openapi-9525-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May  1 01:07:43.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3123 apply -f -'
May  1 01:07:44.208: INFO: stderr: ""
May  1 01:07:44.208: INFO: stdout: "e2e-test-crd-publish-openapi-9525-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  1 01:07:44.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3123 delete e2e-test-crd-publish-openapi-9525-crds test-cr'
May  1 01:07:44.325: INFO: stderr: ""
May  1 01:07:44.325: INFO: stdout: "e2e-test-crd-publish-openapi-9525-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May  1 01:07:44.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9525-crds'
May  1 01:07:44.600: INFO: stderr: ""
May  1 01:07:44.601: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9525-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:46.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3123" for this suite.

• [SLOW TEST:8.969 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":290,"completed":247,"skipped":4192,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:46.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 01:07:47.035: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 01:07:49.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892067, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892067, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892067, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892067, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 01:07:52.116: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:07:52.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2843-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:53.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4820" for this suite.
STEP: Destroying namespace "webhook-4820-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.066 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":290,"completed":248,"skipped":4195,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:53.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May  1 01:07:53.819: INFO: Waiting up to 5m0s for pod "downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7" in namespace "downward-api-5997" to be "Succeeded or Failed"
May  1 01:07:53.915: INFO: Pod "downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 96.41131ms
May  1 01:07:55.921: INFO: Pod "downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101962578s
May  1 01:07:57.925: INFO: Pod "downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106452868s
STEP: Saw pod success
May  1 01:07:57.925: INFO: Pod "downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7" satisfied condition "Succeeded or Failed"
May  1 01:07:57.929: INFO: Trying to get logs from node latest-worker2 pod downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7 container dapi-container: 
STEP: delete the pod
May  1 01:07:57.958: INFO: Waiting for pod downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7 to disappear
May  1 01:07:57.963: INFO: Pod downward-api-524ef6ee-59db-4d75-a03e-c87e5c79c9e7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:07:57.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5997" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":290,"completed":249,"skipped":4206,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:07:57.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5609
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5609
STEP: creating replication controller externalsvc in namespace services-5609
I0501 01:07:58.167827       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5609, replica count: 2
I0501 01:08:01.218179       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:08:04.218448       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May  1 01:08:04.263: INFO: Creating new exec pod
May  1 01:08:08.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5609 execpodpfkth -- /bin/sh -x -c nslookup clusterip-service'
May  1 01:08:08.495: INFO: stderr: "I0501 01:08:08.427883    3556 log.go:172] (0xc0000ea370) (0xc00071af00) Create stream\nI0501 01:08:08.427961    3556 log.go:172] (0xc0000ea370) (0xc00071af00) Stream added, broadcasting: 1\nI0501 01:08:08.430040    3556 log.go:172] (0xc0000ea370) Reply frame received for 1\nI0501 01:08:08.430095    3556 log.go:172] (0xc0000ea370) (0xc0004da280) Create stream\nI0501 01:08:08.430109    3556 log.go:172] (0xc0000ea370) (0xc0004da280) Stream added, broadcasting: 3\nI0501 01:08:08.431220    3556 log.go:172] (0xc0000ea370) Reply frame received for 3\nI0501 01:08:08.431247    3556 log.go:172] (0xc0000ea370) (0xc00040adc0) Create stream\nI0501 01:08:08.431255    3556 log.go:172] (0xc0000ea370) (0xc00040adc0) Stream added, broadcasting: 5\nI0501 01:08:08.432221    3556 log.go:172] (0xc0000ea370) Reply frame received for 5\nI0501 01:08:08.477588    3556 log.go:172] (0xc0000ea370) Data frame received for 5\nI0501 01:08:08.477615    3556 log.go:172] (0xc00040adc0) (5) Data frame handling\nI0501 01:08:08.477637    3556 log.go:172] (0xc00040adc0) (5) Data frame sent\n+ nslookup clusterip-service\nI0501 01:08:08.485944    3556 log.go:172] (0xc0000ea370) Data frame received for 3\nI0501 01:08:08.485982    3556 log.go:172] (0xc0004da280) (3) Data frame handling\nI0501 01:08:08.486021    3556 log.go:172] (0xc0004da280) (3) Data frame sent\nI0501 01:08:08.487454    3556 log.go:172] (0xc0000ea370) Data frame received for 3\nI0501 01:08:08.487477    3556 log.go:172] (0xc0004da280) (3) Data frame handling\nI0501 01:08:08.487628    3556 log.go:172] (0xc0004da280) (3) Data frame sent\nI0501 01:08:08.488142    3556 log.go:172] (0xc0000ea370) Data frame received for 5\nI0501 01:08:08.488195    3556 log.go:172] (0xc00040adc0) (5) Data frame handling\nI0501 01:08:08.488433    3556 log.go:172] (0xc0000ea370) Data frame received for 3\nI0501 01:08:08.488452    3556 log.go:172] (0xc0004da280) (3) Data frame handling\nI0501 01:08:08.490529    3556 log.go:172] (0xc0000ea370) Data frame received for 1\nI0501 01:08:08.490558    3556 log.go:172] (0xc00071af00) (1) Data frame handling\nI0501 01:08:08.490578    3556 log.go:172] (0xc00071af00) (1) Data frame sent\nI0501 01:08:08.490613    3556 log.go:172] (0xc0000ea370) (0xc00071af00) Stream removed, broadcasting: 1\nI0501 01:08:08.490662    3556 log.go:172] (0xc0000ea370) Go away received\nI0501 01:08:08.491132    3556 log.go:172] (0xc0000ea370) (0xc00071af00) Stream removed, broadcasting: 1\nI0501 01:08:08.491156    3556 log.go:172] (0xc0000ea370) (0xc0004da280) Stream removed, broadcasting: 3\nI0501 01:08:08.491169    3556 log.go:172] (0xc0000ea370) (0xc00040adc0) Stream removed, broadcasting: 5\n"
May  1 01:08:08.495: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5609.svc.cluster.local\tcanonical name = externalsvc.services-5609.svc.cluster.local.\nName:\texternalsvc.services-5609.svc.cluster.local\nAddress: 10.99.74.112\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5609, will wait for the garbage collector to delete the pods
May  1 01:08:08.557: INFO: Deleting ReplicationController externalsvc took: 7.744613ms
May  1 01:08:08.858: INFO: Terminating ReplicationController externalsvc pods took: 300.247066ms
May  1 01:08:13.638: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:08:13.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5609" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:15.704 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":290,"completed":250,"skipped":4224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:08:13.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:09:13.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9055" for this suite.

• [SLOW TEST:60.085 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":290,"completed":251,"skipped":4281,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:09:13.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May  1 01:09:13.868: INFO: Waiting up to 5m0s for pod "pod-1c5b0d32-0926-468e-921d-61e8192a0986" in namespace "emptydir-5438" to be "Succeeded or Failed"
May  1 01:09:13.887: INFO: Pod "pod-1c5b0d32-0926-468e-921d-61e8192a0986": Phase="Pending", Reason="", readiness=false. Elapsed: 19.123935ms
May  1 01:09:15.891: INFO: Pod "pod-1c5b0d32-0926-468e-921d-61e8192a0986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022901253s
May  1 01:09:17.895: INFO: Pod "pod-1c5b0d32-0926-468e-921d-61e8192a0986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027146959s
STEP: Saw pod success
May  1 01:09:17.895: INFO: Pod "pod-1c5b0d32-0926-468e-921d-61e8192a0986" satisfied condition "Succeeded or Failed"
May  1 01:09:17.898: INFO: Trying to get logs from node latest-worker pod pod-1c5b0d32-0926-468e-921d-61e8192a0986 container test-container: 
STEP: delete the pod
May  1 01:09:17.950: INFO: Waiting for pod pod-1c5b0d32-0926-468e-921d-61e8192a0986 to disappear
May  1 01:09:17.958: INFO: Pod pod-1c5b0d32-0926-468e-921d-61e8192a0986 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:09:17.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5438" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":252,"skipped":4290,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:09:17.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:09:18.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7378
I0501 01:09:18.068217       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7378, replica count: 1
I0501 01:09:19.118590       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:09:20.118777       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:09:21.119076       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:09:22.119350       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  1 01:09:22.249: INFO: Created: latency-svc-4hb9t
May  1 01:09:22.311: INFO: Got endpoints: latency-svc-4hb9t [91.438131ms]
May  1 01:09:22.351: INFO: Created: latency-svc-98h8w
May  1 01:09:22.385: INFO: Got endpoints: latency-svc-98h8w [74.396807ms]
May  1 01:09:22.406: INFO: Created: latency-svc-shmlb
May  1 01:09:22.471: INFO: Got endpoints: latency-svc-shmlb [159.533096ms]
May  1 01:09:22.473: INFO: Created: latency-svc-k28mt
May  1 01:09:22.481: INFO: Got endpoints: latency-svc-k28mt [170.627798ms]
May  1 01:09:22.555: INFO: Created: latency-svc-dp6gb
May  1 01:09:22.620: INFO: Got endpoints: latency-svc-dp6gb [309.475258ms]
May  1 01:09:22.625: INFO: Created: latency-svc-thfln
May  1 01:09:22.638: INFO: Got endpoints: latency-svc-thfln [326.579029ms]
May  1 01:09:22.682: INFO: Created: latency-svc-85b74
May  1 01:09:22.698: INFO: Got endpoints: latency-svc-85b74 [386.866111ms]
May  1 01:09:22.717: INFO: Created: latency-svc-7n2nv
May  1 01:09:22.800: INFO: Got endpoints: latency-svc-7n2nv [488.598073ms]
May  1 01:09:22.804: INFO: Created: latency-svc-mlqrc
May  1 01:09:22.811: INFO: Got endpoints: latency-svc-mlqrc [500.536295ms]
May  1 01:09:22.838: INFO: Created: latency-svc-msxp4
May  1 01:09:22.854: INFO: Got endpoints: latency-svc-msxp4 [543.574565ms]
May  1 01:09:22.879: INFO: Created: latency-svc-2cxqn
May  1 01:09:22.896: INFO: Got endpoints: latency-svc-2cxqn [585.041888ms]
May  1 01:09:22.943: INFO: Created: latency-svc-2k4l8
May  1 01:09:22.956: INFO: Got endpoints: latency-svc-2k4l8 [645.567963ms]
May  1 01:09:22.975: INFO: Created: latency-svc-lcjkn
May  1 01:09:22.993: INFO: Got endpoints: latency-svc-lcjkn [681.550836ms]
May  1 01:09:23.011: INFO: Created: latency-svc-nhcf6
May  1 01:09:23.029: INFO: Got endpoints: latency-svc-nhcf6 [718.467785ms]
May  1 01:09:23.108: INFO: Created: latency-svc-rjqj9
May  1 01:09:23.125: INFO: Got endpoints: latency-svc-rjqj9 [814.321097ms]
May  1 01:09:23.143: INFO: Created: latency-svc-s6zkn
May  1 01:09:23.156: INFO: Got endpoints: latency-svc-s6zkn [844.624448ms]
May  1 01:09:23.173: INFO: Created: latency-svc-2tp5s
May  1 01:09:23.213: INFO: Got endpoints: latency-svc-2tp5s [828.238698ms]
May  1 01:09:23.239: INFO: Created: latency-svc-2g2tm
May  1 01:09:23.253: INFO: Got endpoints: latency-svc-2g2tm [781.978156ms]
May  1 01:09:23.275: INFO: Created: latency-svc-vzdg4
May  1 01:09:23.288: INFO: Got endpoints: latency-svc-vzdg4 [807.072419ms]
May  1 01:09:23.364: INFO: Created: latency-svc-jgd92
May  1 01:09:23.374: INFO: Got endpoints: latency-svc-jgd92 [753.509201ms]
May  1 01:09:23.407: INFO: Created: latency-svc-md6qr
May  1 01:09:23.420: INFO: Got endpoints: latency-svc-md6qr [782.656785ms]
May  1 01:09:23.506: INFO: Created: latency-svc-n6srp
May  1 01:09:23.539: INFO: Got endpoints: latency-svc-n6srp [841.183482ms]
May  1 01:09:23.540: INFO: Created: latency-svc-5g95b
May  1 01:09:23.582: INFO: Got endpoints: latency-svc-5g95b [782.22578ms]
May  1 01:09:23.668: INFO: Created: latency-svc-qdxkb
May  1 01:09:23.707: INFO: Got endpoints: latency-svc-qdxkb [895.802186ms]
May  1 01:09:23.709: INFO: Created: latency-svc-2lwmk
May  1 01:09:23.721: INFO: Got endpoints: latency-svc-2lwmk [867.125325ms]
May  1 01:09:23.800: INFO: Created: latency-svc-jhmfl
May  1 01:09:23.817: INFO: Got endpoints: latency-svc-jhmfl [921.10128ms]
May  1 01:09:23.839: INFO: Created: latency-svc-pd9ph
May  1 01:09:23.854: INFO: Got endpoints: latency-svc-pd9ph [897.296213ms]
May  1 01:09:23.875: INFO: Created: latency-svc-glxdh
May  1 01:09:23.891: INFO: Got endpoints: latency-svc-glxdh [898.196925ms]
May  1 01:09:23.955: INFO: Created: latency-svc-78z9p
May  1 01:09:23.962: INFO: Got endpoints: latency-svc-78z9p [932.636332ms]
May  1 01:09:24.022: INFO: Created: latency-svc-m5xjk
May  1 01:09:24.040: INFO: Got endpoints: latency-svc-m5xjk [914.676074ms]
May  1 01:09:24.093: INFO: Created: latency-svc-dtfwv
May  1 01:09:24.115: INFO: Got endpoints: latency-svc-dtfwv [958.8239ms]
May  1 01:09:24.151: INFO: Created: latency-svc-qjfjd
May  1 01:09:24.167: INFO: Got endpoints: latency-svc-qjfjd [953.442378ms]
May  1 01:09:24.230: INFO: Created: latency-svc-mtpb6
May  1 01:09:24.239: INFO: Got endpoints: latency-svc-mtpb6 [985.841037ms]
May  1 01:09:24.289: INFO: Created: latency-svc-vbbv9
May  1 01:09:24.325: INFO: Got endpoints: latency-svc-vbbv9 [1.03636387s]
May  1 01:09:24.407: INFO: Created: latency-svc-j48k6
May  1 01:09:24.414: INFO: Got endpoints: latency-svc-j48k6 [1.040491774s]
May  1 01:09:24.470: INFO: Created: latency-svc-tfps6
May  1 01:09:24.487: INFO: Got endpoints: latency-svc-tfps6 [1.066302177s]
May  1 01:09:24.542: INFO: Created: latency-svc-gwpcb
May  1 01:09:24.571: INFO: Got endpoints: latency-svc-gwpcb [1.031279759s]
May  1 01:09:24.601: INFO: Created: latency-svc-gf75w
May  1 01:09:24.624: INFO: Got endpoints: latency-svc-gf75w [1.042468078s]
May  1 01:09:24.715: INFO: Created: latency-svc-r4rwk
May  1 01:09:24.739: INFO: Got endpoints: latency-svc-r4rwk [1.031303572s]
May  1 01:09:24.792: INFO: Created: latency-svc-g5g4j
May  1 01:09:24.859: INFO: Got endpoints: latency-svc-g5g4j [1.137760673s]
May  1 01:09:24.918: INFO: Created: latency-svc-2ncb5
May  1 01:09:24.936: INFO: Got endpoints: latency-svc-2ncb5 [1.118950945s]
May  1 01:09:25.021: INFO: Created: latency-svc-8vphj
May  1 01:09:25.033: INFO: Got endpoints: latency-svc-8vphj [1.178884967s]
May  1 01:09:25.087: INFO: Created: latency-svc-rrw87
May  1 01:09:25.093: INFO: Got endpoints: latency-svc-rrw87 [1.201911597s]
May  1 01:09:25.165: INFO: Created: latency-svc-s6tb6
May  1 01:09:25.170: INFO: Got endpoints: latency-svc-s6tb6 [1.208258741s]
May  1 01:09:25.206: INFO: Created: latency-svc-k787h
May  1 01:09:25.213: INFO: Got endpoints: latency-svc-k787h [1.172951269s]
May  1 01:09:25.296: INFO: Created: latency-svc-m82b8
May  1 01:09:25.392: INFO: Got endpoints: latency-svc-m82b8 [1.277863799s]
May  1 01:09:25.464: INFO: Created: latency-svc-xx79k
May  1 01:09:25.477: INFO: Got endpoints: latency-svc-xx79k [1.310311767s]
May  1 01:09:25.507: INFO: Created: latency-svc-2mccz
May  1 01:09:25.520: INFO: Got endpoints: latency-svc-2mccz [1.281500941s]
May  1 01:09:25.561: INFO: Created: latency-svc-mt5rs
May  1 01:09:25.626: INFO: Got endpoints: latency-svc-mt5rs [1.301450949s]
May  1 01:09:25.630: INFO: Created: latency-svc-qklqd
May  1 01:09:25.640: INFO: Got endpoints: latency-svc-qklqd [1.225385563s]
May  1 01:09:25.800: INFO: Created: latency-svc-mtnlf
May  1 01:09:25.831: INFO: Got endpoints: latency-svc-mtnlf [1.344313856s]
May  1 01:09:25.860: INFO: Created: latency-svc-zhhjb
May  1 01:09:25.877: INFO: Got endpoints: latency-svc-zhhjb [1.306337534s]
May  1 01:09:25.974: INFO: Created: latency-svc-w2hvh
May  1 01:09:25.977: INFO: Got endpoints: latency-svc-w2hvh [1.352100305s]
May  1 01:09:26.041: INFO: Created: latency-svc-mvqs6
May  1 01:09:26.067: INFO: Got endpoints: latency-svc-mvqs6 [1.328345955s]
May  1 01:09:26.155: INFO: Created: latency-svc-tdxnm
May  1 01:09:26.181: INFO: Got endpoints: latency-svc-tdxnm [1.321466986s]
May  1 01:09:26.233: INFO: Created: latency-svc-nttsx
May  1 01:09:26.297: INFO: Got endpoints: latency-svc-nttsx [1.360667254s]
May  1 01:09:26.328: INFO: Created: latency-svc-c5jx8
May  1 01:09:26.343: INFO: Got endpoints: latency-svc-c5jx8 [1.310501126s]
May  1 01:09:26.370: INFO: Created: latency-svc-x7zgt
May  1 01:09:26.386: INFO: Got endpoints: latency-svc-x7zgt [1.292777051s]
May  1 01:09:26.476: INFO: Created: latency-svc-xrg8l
May  1 01:09:26.481: INFO: Got endpoints: latency-svc-xrg8l [1.3100594s]
May  1 01:09:26.514: INFO: Created: latency-svc-r5djx
May  1 01:09:26.530: INFO: Got endpoints: latency-svc-r5djx [1.317176386s]
May  1 01:09:26.557: INFO: Created: latency-svc-668xh
May  1 01:09:26.572: INFO: Got endpoints: latency-svc-668xh [1.179311917s]
May  1 01:09:26.634: INFO: Created: latency-svc-7kzfb
May  1 01:09:26.644: INFO: Got endpoints: latency-svc-7kzfb [1.167038717s]
May  1 01:09:26.670: INFO: Created: latency-svc-zj6nc
May  1 01:09:26.687: INFO: Got endpoints: latency-svc-zj6nc [1.16678603s]
May  1 01:09:26.799: INFO: Created: latency-svc-svfz6
May  1 01:09:26.820: INFO: Got endpoints: latency-svc-svfz6 [1.193488431s]
May  1 01:09:26.850: INFO: Created: latency-svc-km9cj
May  1 01:09:26.868: INFO: Got endpoints: latency-svc-km9cj [1.227911948s]
May  1 01:09:26.893: INFO: Created: latency-svc-5qpdt
May  1 01:09:26.938: INFO: Got endpoints: latency-svc-5qpdt [1.106768477s]
May  1 01:09:26.970: INFO: Created: latency-svc-msm7d
May  1 01:09:26.987: INFO: Got endpoints: latency-svc-msm7d [1.110358604s]
May  1 01:09:27.006: INFO: Created: latency-svc-x6zbh
May  1 01:09:27.018: INFO: Got endpoints: latency-svc-x6zbh [1.041015113s]
May  1 01:09:27.036: INFO: Created: latency-svc-p4d5h
May  1 01:09:27.105: INFO: Got endpoints: latency-svc-p4d5h [1.038008099s]
May  1 01:09:27.107: INFO: Created: latency-svc-dblt2
May  1 01:09:27.121: INFO: Got endpoints: latency-svc-dblt2 [940.312843ms]
May  1 01:09:27.138: INFO: Created: latency-svc-dbv8g
May  1 01:09:27.156: INFO: Got endpoints: latency-svc-dbv8g [859.514724ms]
May  1 01:09:27.204: INFO: Created: latency-svc-nzpwg
May  1 01:09:27.279: INFO: Got endpoints: latency-svc-nzpwg [935.784192ms]
May  1 01:09:27.281: INFO: Created: latency-svc-nbvnz
May  1 01:09:27.288: INFO: Got endpoints: latency-svc-nbvnz [902.832885ms]
May  1 01:09:27.318: INFO: Created: latency-svc-8vslt
May  1 01:09:27.331: INFO: Got endpoints: latency-svc-8vslt [850.178293ms]
May  1 01:09:27.358: INFO: Created: latency-svc-m9hlg
May  1 01:09:27.465: INFO: Got endpoints: latency-svc-m9hlg [934.794518ms]
May  1 01:09:27.472: INFO: Created: latency-svc-cwqtr
May  1 01:09:27.491: INFO: Got endpoints: latency-svc-cwqtr [919.565634ms]
May  1 01:09:27.522: INFO: Created: latency-svc-67vsg
May  1 01:09:27.552: INFO: Got endpoints: latency-svc-67vsg [907.162006ms]
May  1 01:09:27.620: INFO: Created: latency-svc-48tm8
May  1 01:09:27.661: INFO: Got endpoints: latency-svc-48tm8 [973.883313ms]
May  1 01:09:27.702: INFO: Created: latency-svc-rzgjn
May  1 01:09:27.776: INFO: Got endpoints: latency-svc-rzgjn [955.581526ms]
May  1 01:09:27.804: INFO: Created: latency-svc-t8gx2
May  1 01:09:27.834: INFO: Got endpoints: latency-svc-t8gx2 [966.738435ms]
May  1 01:09:27.967: INFO: Created: latency-svc-vf5xg
May  1 01:09:27.972: INFO: Got endpoints: latency-svc-vf5xg [1.034089009s]
May  1 01:09:28.002: INFO: Created: latency-svc-dt9tl
May  1 01:09:28.023: INFO: Got endpoints: latency-svc-dt9tl [1.035977987s]
May  1 01:09:28.178: INFO: Created: latency-svc-pt9fg
May  1 01:09:28.182: INFO: Got endpoints: latency-svc-pt9fg [1.164021043s]
May  1 01:09:28.230: INFO: Created: latency-svc-5zlh7
May  1 01:09:28.246: INFO: Got endpoints: latency-svc-5zlh7 [1.141184842s]
May  1 01:09:28.332: INFO: Created: latency-svc-wx6xb
May  1 01:09:28.341: INFO: Got endpoints: latency-svc-wx6xb [1.21976532s]
May  1 01:09:28.362: INFO: Created: latency-svc-xczbb
May  1 01:09:28.377: INFO: Got endpoints: latency-svc-xczbb [1.220779669s]
May  1 01:09:28.398: INFO: Created: latency-svc-ldctw
May  1 01:09:28.414: INFO: Got endpoints: latency-svc-ldctw [1.134939928s]
May  1 01:09:28.482: INFO: Created: latency-svc-qt2f4
May  1 01:09:28.512: INFO: Got endpoints: latency-svc-qt2f4 [1.223924626s]
May  1 01:09:28.514: INFO: Created: latency-svc-6s7sw
May  1 01:09:28.530: INFO: Got endpoints: latency-svc-6s7sw [1.199323733s]
May  1 01:09:28.550: INFO: Created: latency-svc-jqc77
May  1 01:09:28.566: INFO: Got endpoints: latency-svc-jqc77 [1.100507456s]
May  1 01:09:28.662: INFO: Created: latency-svc-p5wsw
May  1 01:09:28.716: INFO: Got endpoints: latency-svc-p5wsw [1.224149931s]
May  1 01:09:28.716: INFO: Created: latency-svc-s4dhx
May  1 01:09:28.733: INFO: Got endpoints: latency-svc-s4dhx [1.181321763s]
May  1 01:09:28.830: INFO: Created: latency-svc-gkvz7
May  1 01:09:28.854: INFO: Got endpoints: latency-svc-gkvz7 [1.192453572s]
May  1 01:09:28.855: INFO: Created: latency-svc-dkxnc
May  1 01:09:28.885: INFO: Got endpoints: latency-svc-dkxnc [1.109233254s]
May  1 01:09:28.985: INFO: Created: latency-svc-qfqkj
May  1 01:09:28.991: INFO: Got endpoints: latency-svc-qfqkj [1.156807849s]
May  1 01:09:29.022: INFO: Created: latency-svc-sc98t
May  1 01:09:29.033: INFO: Got endpoints: latency-svc-sc98t [1.061549789s]
May  1 01:09:29.064: INFO: Created: latency-svc-pskdm
May  1 01:09:29.083: INFO: Got endpoints: latency-svc-pskdm [1.059843187s]
May  1 01:09:29.140: INFO: Created: latency-svc-clrjq
May  1 01:09:29.159: INFO: Got endpoints: latency-svc-clrjq [977.430452ms]
May  1 01:09:29.203: INFO: Created: latency-svc-n4qjx
May  1 01:09:29.215: INFO: Got endpoints: latency-svc-n4qjx [968.38694ms]
May  1 01:09:29.279: INFO: Created: latency-svc-dlhbw
May  1 01:09:29.282: INFO: Got endpoints: latency-svc-dlhbw [940.953957ms]
May  1 01:09:29.340: INFO: Created: latency-svc-p96bh
May  1 01:09:29.353: INFO: Got endpoints: latency-svc-p96bh [975.562568ms]
May  1 01:09:29.375: INFO: Created: latency-svc-x7bg4
May  1 01:09:29.446: INFO: Got endpoints: latency-svc-x7bg4 [1.032211662s]
May  1 01:09:29.448: INFO: Created: latency-svc-h4nqf
May  1 01:09:29.461: INFO: Got endpoints: latency-svc-h4nqf [948.839607ms]
May  1 01:09:29.486: INFO: Created: latency-svc-7tlnn
May  1 01:09:29.498: INFO: Got endpoints: latency-svc-7tlnn [967.802424ms]
May  1 01:09:29.520: INFO: Created: latency-svc-72k9p
May  1 01:09:29.535: INFO: Got endpoints: latency-svc-72k9p [969.588377ms]
May  1 01:09:29.578: INFO: Created: latency-svc-xqbpb
May  1 01:09:29.610: INFO: Created: latency-svc-wdr97
May  1 01:09:29.610: INFO: Got endpoints: latency-svc-xqbpb [894.268042ms]
May  1 01:09:29.640: INFO: Got endpoints: latency-svc-wdr97 [907.063328ms]
May  1 01:09:29.669: INFO: Created: latency-svc-prvvx
May  1 01:09:29.734: INFO: Got endpoints: latency-svc-prvvx [880.473541ms]
May  1 01:09:29.736: INFO: Created: latency-svc-w64tq
May  1 01:09:29.745: INFO: Got endpoints: latency-svc-w64tq [860.322267ms]
May  1 01:09:29.777: INFO: Created: latency-svc-xqpxw
May  1 01:09:29.806: INFO: Got endpoints: latency-svc-xqpxw [815.027524ms]
May  1 01:09:29.832: INFO: Created: latency-svc-qk5gt
May  1 01:09:29.873: INFO: Got endpoints: latency-svc-qk5gt [839.026247ms]
May  1 01:09:29.892: INFO: Created: latency-svc-zvbbz
May  1 01:09:29.906: INFO: Got endpoints: latency-svc-zvbbz [822.583223ms]
May  1 01:09:29.946: INFO: Created: latency-svc-6tgkd
May  1 01:09:29.963: INFO: Got endpoints: latency-svc-6tgkd [803.656797ms]
May  1 01:09:30.027: INFO: Created: latency-svc-kzklh
May  1 01:09:30.071: INFO: Got endpoints: latency-svc-kzklh [856.664253ms]
May  1 01:09:30.108: INFO: Created: latency-svc-k4mp7
May  1 01:09:30.125: INFO: Got endpoints: latency-svc-k4mp7 [843.134978ms]
May  1 01:09:30.179: INFO: Created: latency-svc-pztqw
May  1 01:09:30.197: INFO: Got endpoints: latency-svc-pztqw [844.541011ms]
May  1 01:09:30.240: INFO: Created: latency-svc-m2rjk
May  1 01:09:30.258: INFO: Got endpoints: latency-svc-m2rjk [811.673254ms]
May  1 01:09:30.315: INFO: Created: latency-svc-xz9pb
May  1 01:09:30.318: INFO: Got endpoints: latency-svc-xz9pb [856.6703ms]
May  1 01:09:30.359: INFO: Created: latency-svc-w6fqq
May  1 01:09:30.366: INFO: Got endpoints: latency-svc-w6fqq [867.906305ms]
May  1 01:09:30.413: INFO: Created: latency-svc-v9m5p
May  1 01:09:30.482: INFO: Got endpoints: latency-svc-v9m5p [946.663853ms]
May  1 01:09:30.483: INFO: Created: latency-svc-h5wcg
May  1 01:09:30.493: INFO: Got endpoints: latency-svc-h5wcg [882.531034ms]
May  1 01:09:30.527: INFO: Created: latency-svc-4xplq
May  1 01:09:30.535: INFO: Got endpoints: latency-svc-4xplq [894.626001ms]
May  1 01:09:30.557: INFO: Created: latency-svc-9xkzq
May  1 01:09:30.571: INFO: Got endpoints: latency-svc-9xkzq [836.855927ms]
May  1 01:09:30.620: INFO: Created: latency-svc-8w2vl
May  1 01:09:30.653: INFO: Created: latency-svc-tktfh
May  1 01:09:30.653: INFO: Got endpoints: latency-svc-8w2vl [908.172875ms]
May  1 01:09:30.679: INFO: Got endpoints: latency-svc-tktfh [872.938666ms]
May  1 01:09:30.720: INFO: Created: latency-svc-47fxp
May  1 01:09:30.798: INFO: Got endpoints: latency-svc-47fxp [925.41559ms]
May  1 01:09:30.800: INFO: Created: latency-svc-ggm9b
May  1 01:09:30.812: INFO: Got endpoints: latency-svc-ggm9b [905.837233ms]
May  1 01:09:30.881: INFO: Created: latency-svc-dgd5j
May  1 01:09:30.955: INFO: Got endpoints: latency-svc-dgd5j [991.977395ms]
May  1 01:09:30.957: INFO: Created: latency-svc-xk9zc
May  1 01:09:30.968: INFO: Got endpoints: latency-svc-xk9zc [896.530995ms]
May  1 01:09:31.002: INFO: Created: latency-svc-q6lmj
May  1 01:09:31.016: INFO: Got endpoints: latency-svc-q6lmj [891.247768ms]
May  1 01:09:31.037: INFO: Created: latency-svc-tc69k
May  1 01:09:31.046: INFO: Got endpoints: latency-svc-tc69k [848.984432ms]
May  1 01:09:31.114: INFO: Created: latency-svc-v76hm
May  1 01:09:31.131: INFO: Got endpoints: latency-svc-v76hm [872.864227ms]
May  1 01:09:31.151: INFO: Created: latency-svc-9v926
May  1 01:09:31.168: INFO: Got endpoints: latency-svc-9v926 [849.536407ms]
May  1 01:09:31.188: INFO: Created: latency-svc-fbqx6
May  1 01:09:31.204: INFO: Got endpoints: latency-svc-fbqx6 [837.388147ms]
May  1 01:09:31.255: INFO: Created: latency-svc-7ctlg
May  1 01:09:31.259: INFO: Got endpoints: latency-svc-7ctlg [776.813122ms]
May  1 01:09:31.301: INFO: Created: latency-svc-b9xhv
May  1 01:09:31.319: INFO: Got endpoints: latency-svc-b9xhv [826.278481ms]
May  1 01:09:31.407: INFO: Created: latency-svc-2pggd
May  1 01:09:31.414: INFO: Got endpoints: latency-svc-2pggd [879.267298ms]
May  1 01:09:31.445: INFO: Created: latency-svc-9n4bw
May  1 01:09:31.463: INFO: Got endpoints: latency-svc-9n4bw [891.73082ms]
May  1 01:09:31.481: INFO: Created: latency-svc-vcx6w
May  1 01:09:31.572: INFO: Got endpoints: latency-svc-vcx6w [918.520363ms]
May  1 01:09:31.574: INFO: Created: latency-svc-grhpj
May  1 01:09:31.601: INFO: Got endpoints: latency-svc-grhpj [921.884411ms]
May  1 01:09:31.601: INFO: Created: latency-svc-299gb
May  1 01:09:31.625: INFO: Got endpoints: latency-svc-299gb [826.86194ms]
May  1 01:09:31.655: INFO: Created: latency-svc-qp28m
May  1 01:09:31.667: INFO: Got endpoints: latency-svc-qp28m [855.424699ms]
May  1 01:09:31.716: INFO: Created: latency-svc-kxn28
May  1 01:09:31.739: INFO: Created: latency-svc-gspnx
May  1 01:09:31.739: INFO: Got endpoints: latency-svc-kxn28 [784.072556ms]
May  1 01:09:31.763: INFO: Got endpoints: latency-svc-gspnx [795.134682ms]
May  1 01:09:31.793: INFO: Created: latency-svc-2xmc5
May  1 01:09:31.812: INFO: Got endpoints: latency-svc-2xmc5 [795.899858ms]
May  1 01:09:32.268: INFO: Created: latency-svc-c7ngs
May  1 01:09:32.316: INFO: Got endpoints: latency-svc-c7ngs [1.269751016s]
May  1 01:09:32.664: INFO: Created: latency-svc-fwswr
May  1 01:09:32.669: INFO: Got endpoints: latency-svc-fwswr [1.537650567s]
May  1 01:09:32.740: INFO: Created: latency-svc-j588h
May  1 01:09:32.811: INFO: Got endpoints: latency-svc-j588h [1.643391491s]
May  1 01:09:32.824: INFO: Created: latency-svc-lwg4h
May  1 01:09:32.843: INFO: Got endpoints: latency-svc-lwg4h [1.639353327s]
May  1 01:09:32.866: INFO: Created: latency-svc-gnbd5
May  1 01:09:32.879: INFO: Got endpoints: latency-svc-gnbd5 [1.620494212s]
May  1 01:09:32.901: INFO: Created: latency-svc-nn52x
May  1 01:09:32.949: INFO: Got endpoints: latency-svc-nn52x [1.630308248s]
May  1 01:09:32.968: INFO: Created: latency-svc-4txs7
May  1 01:09:32.982: INFO: Got endpoints: latency-svc-4txs7 [1.567848186s]
May  1 01:09:33.003: INFO: Created: latency-svc-djr6m
May  1 01:09:33.018: INFO: Got endpoints: latency-svc-djr6m [1.555428283s]
May  1 01:09:33.039: INFO: Created: latency-svc-vvcm4
May  1 01:09:33.048: INFO: Got endpoints: latency-svc-vvcm4 [1.476074375s]
May  1 01:09:33.105: INFO: Created: latency-svc-b5p7x
May  1 01:09:33.134: INFO: Got endpoints: latency-svc-b5p7x [1.532227763s]
May  1 01:09:33.159: INFO: Created: latency-svc-brdl5
May  1 01:09:33.175: INFO: Got endpoints: latency-svc-brdl5 [1.549733659s]
May  1 01:09:33.195: INFO: Created: latency-svc-24wsn
May  1 01:09:33.262: INFO: Got endpoints: latency-svc-24wsn [1.594416761s]
May  1 01:09:33.285: INFO: Created: latency-svc-hpb8s
May  1 01:09:33.295: INFO: Got endpoints: latency-svc-hpb8s [1.556008416s]
May  1 01:09:33.346: INFO: Created: latency-svc-fqwmg
May  1 01:09:33.410: INFO: Got endpoints: latency-svc-fqwmg [148.598124ms]
May  1 01:09:33.412: INFO: Created: latency-svc-6fmnc
May  1 01:09:33.442: INFO: Got endpoints: latency-svc-6fmnc [1.678526648s]
May  1 01:09:33.472: INFO: Created: latency-svc-bl5ds
May  1 01:09:33.482: INFO: Got endpoints: latency-svc-bl5ds [1.669744407s]
May  1 01:09:33.501: INFO: Created: latency-svc-n884c
May  1 01:09:33.578: INFO: Got endpoints: latency-svc-n884c [1.261954775s]
May  1 01:09:33.580: INFO: Created: latency-svc-mbdbs
May  1 01:09:33.603: INFO: Got endpoints: latency-svc-mbdbs [934.894472ms]
May  1 01:09:33.633: INFO: Created: latency-svc-b2ptp
May  1 01:09:33.651: INFO: Got endpoints: latency-svc-b2ptp [840.288683ms]
May  1 01:09:33.670: INFO: Created: latency-svc-cw8cb
May  1 01:09:33.722: INFO: Got endpoints: latency-svc-cw8cb [878.460913ms]
May  1 01:09:33.735: INFO: Created: latency-svc-5qzsc
May  1 01:09:33.766: INFO: Got endpoints: latency-svc-5qzsc [886.202369ms]
May  1 01:09:33.789: INFO: Created: latency-svc-kqm7d
May  1 01:09:33.802: INFO: Got endpoints: latency-svc-kqm7d [852.485342ms]
May  1 01:09:33.877: INFO: Created: latency-svc-4vkm2
May  1 01:09:33.891: INFO: Got endpoints: latency-svc-4vkm2 [909.162449ms]
May  1 01:09:33.939: INFO: Created: latency-svc-mr84h
May  1 01:09:33.958: INFO: Got endpoints: latency-svc-mr84h [940.237465ms]
May  1 01:09:34.034: INFO: Created: latency-svc-hjqq6
May  1 01:09:34.043: INFO: Got endpoints: latency-svc-hjqq6 [995.23103ms]
May  1 01:09:34.065: INFO: Created: latency-svc-qg2xc
May  1 01:09:34.089: INFO: Got endpoints: latency-svc-qg2xc [955.749542ms]
May  1 01:09:34.124: INFO: Created: latency-svc-t9krq
May  1 01:09:34.189: INFO: Got endpoints: latency-svc-t9krq [1.01381162s]
May  1 01:09:34.203: INFO: Created: latency-svc-6mv6s
May  1 01:09:34.218: INFO: Got endpoints: latency-svc-6mv6s [922.172379ms]
May  1 01:09:34.239: INFO: Created: latency-svc-n8zsj
May  1 01:09:34.269: INFO: Got endpoints: latency-svc-n8zsj [858.854583ms]
May  1 01:09:34.350: INFO: Created: latency-svc-jccxc
May  1 01:09:34.368: INFO: Got endpoints: latency-svc-jccxc [925.992859ms]
May  1 01:09:34.413: INFO: Created: latency-svc-zhv9k
May  1 01:09:34.536: INFO: Got endpoints: latency-svc-zhv9k [1.05407958s]
May  1 01:09:34.538: INFO: Created: latency-svc-tbknp
May  1 01:09:34.548: INFO: Got endpoints: latency-svc-tbknp [969.467595ms]
May  1 01:09:34.569: INFO: Created: latency-svc-74vdm
May  1 01:09:34.585: INFO: Got endpoints: latency-svc-74vdm [981.336806ms]
May  1 01:09:34.611: INFO: Created: latency-svc-76rhq
May  1 01:09:34.679: INFO: Got endpoints: latency-svc-76rhq [1.028054433s]
May  1 01:09:34.750: INFO: Created: latency-svc-jz2wc
May  1 01:09:34.759: INFO: Got endpoints: latency-svc-jz2wc [1.037197115s]
May  1 01:09:34.824: INFO: Created: latency-svc-vcq97
May  1 01:09:34.837: INFO: Got endpoints: latency-svc-vcq97 [1.071393899s]
May  1 01:09:34.863: INFO: Created: latency-svc-246p2
May  1 01:09:34.893: INFO: Got endpoints: latency-svc-246p2 [1.091383861s]
May  1 01:09:34.917: INFO: Created: latency-svc-c42mj
May  1 01:09:34.961: INFO: Got endpoints: latency-svc-c42mj [1.06984911s]
May  1 01:09:34.965: INFO: Created: latency-svc-q6fth
May  1 01:09:34.982: INFO: Got endpoints: latency-svc-q6fth [1.0231121s]
May  1 01:09:35.001: INFO: Created: latency-svc-df8l9
May  1 01:09:35.029: INFO: Got endpoints: latency-svc-df8l9 [985.704062ms]
May  1 01:09:35.056: INFO: Created: latency-svc-4knxn
May  1 01:09:35.117: INFO: Got endpoints: latency-svc-4knxn [1.027480368s]
May  1 01:09:35.140: INFO: Created: latency-svc-x268f
May  1 01:09:35.157: INFO: Got endpoints: latency-svc-x268f [968.566154ms]
May  1 01:09:35.175: INFO: Created: latency-svc-qkvhz
May  1 01:09:35.200: INFO: Got endpoints: latency-svc-qkvhz [981.994396ms]
May  1 01:09:35.271: INFO: Created: latency-svc-9mxhp
May  1 01:09:35.295: INFO: Got endpoints: latency-svc-9mxhp [1.025612946s]
May  1 01:09:35.296: INFO: Created: latency-svc-gxs8p
May  1 01:09:35.326: INFO: Got endpoints: latency-svc-gxs8p [958.117931ms]
May  1 01:09:35.355: INFO: Created: latency-svc-r2fgx
May  1 01:09:35.419: INFO: Got endpoints: latency-svc-r2fgx [882.681238ms]
May  1 01:09:35.427: INFO: Created: latency-svc-mx7hd
May  1 01:09:35.440: INFO: Got endpoints: latency-svc-mx7hd [892.181998ms]
May  1 01:09:35.463: INFO: Created: latency-svc-68585
May  1 01:09:35.487: INFO: Got endpoints: latency-svc-68585 [901.996238ms]
May  1 01:09:35.560: INFO: Created: latency-svc-f727r
May  1 01:09:35.589: INFO: Got endpoints: latency-svc-f727r [909.457018ms]
May  1 01:09:35.589: INFO: Created: latency-svc-2npkz
May  1 01:09:35.603: INFO: Got endpoints: latency-svc-2npkz [844.244503ms]
May  1 01:09:35.625: INFO: Created: latency-svc-sc42s
May  1 01:09:35.655: INFO: Got endpoints: latency-svc-sc42s [817.702278ms]
May  1 01:09:35.710: INFO: Created: latency-svc-g2pvh
May  1 01:09:35.712: INFO: Got endpoints: latency-svc-g2pvh [819.172943ms]
May  1 01:09:35.746: INFO: Created: latency-svc-lfxxj
May  1 01:09:35.760: INFO: Got endpoints: latency-svc-lfxxj [798.594853ms]
May  1 01:09:35.781: INFO: Created: latency-svc-lmwn2
May  1 01:09:35.799: INFO: Got endpoints: latency-svc-lmwn2 [817.636267ms]
May  1 01:09:35.853: INFO: Created: latency-svc-zms4j
May  1 01:09:35.895: INFO: Got endpoints: latency-svc-zms4j [865.506994ms]
May  1 01:09:35.896: INFO: Created: latency-svc-dcqjv
May  1 01:09:35.923: INFO: Got endpoints: latency-svc-dcqjv [805.76691ms]
May  1 01:09:35.923: INFO: Latencies: [74.396807ms 148.598124ms 159.533096ms 170.627798ms 309.475258ms 326.579029ms 386.866111ms 488.598073ms 500.536295ms 543.574565ms 585.041888ms 645.567963ms 681.550836ms 718.467785ms 753.509201ms 776.813122ms 781.978156ms 782.22578ms 782.656785ms 784.072556ms 795.134682ms 795.899858ms 798.594853ms 803.656797ms 805.76691ms 807.072419ms 811.673254ms 814.321097ms 815.027524ms 817.636267ms 817.702278ms 819.172943ms 822.583223ms 826.278481ms 826.86194ms 828.238698ms 836.855927ms 837.388147ms 839.026247ms 840.288683ms 841.183482ms 843.134978ms 844.244503ms 844.541011ms 844.624448ms 848.984432ms 849.536407ms 850.178293ms 852.485342ms 855.424699ms 856.664253ms 856.6703ms 858.854583ms 859.514724ms 860.322267ms 865.506994ms 867.125325ms 867.906305ms 872.864227ms 872.938666ms 878.460913ms 879.267298ms 880.473541ms 882.531034ms 882.681238ms 886.202369ms 891.247768ms 891.73082ms 892.181998ms 894.268042ms 894.626001ms 895.802186ms 896.530995ms 897.296213ms 898.196925ms 901.996238ms 902.832885ms 905.837233ms 907.063328ms 907.162006ms 908.172875ms 909.162449ms 909.457018ms 914.676074ms 918.520363ms 919.565634ms 921.10128ms 921.884411ms 922.172379ms 925.41559ms 925.992859ms 932.636332ms 934.794518ms 934.894472ms 935.784192ms 940.237465ms 940.312843ms 940.953957ms 946.663853ms 948.839607ms 953.442378ms 955.581526ms 955.749542ms 958.117931ms 958.8239ms 966.738435ms 967.802424ms 968.38694ms 968.566154ms 969.467595ms 969.588377ms 973.883313ms 975.562568ms 977.430452ms 981.336806ms 981.994396ms 985.704062ms 985.841037ms 991.977395ms 995.23103ms 1.01381162s 1.0231121s 1.025612946s 1.027480368s 1.028054433s 1.031279759s 1.031303572s 1.032211662s 1.034089009s 1.035977987s 1.03636387s 1.037197115s 1.038008099s 1.040491774s 1.041015113s 1.042468078s 1.05407958s 1.059843187s 1.061549789s 1.066302177s 1.06984911s 1.071393899s 1.091383861s 1.100507456s 1.106768477s 1.109233254s 1.110358604s 1.118950945s 1.134939928s 1.137760673s 1.141184842s 1.156807849s 1.164021043s 1.16678603s 1.167038717s 1.172951269s 1.178884967s 1.179311917s 1.181321763s 1.192453572s 1.193488431s 1.199323733s 1.201911597s 1.208258741s 1.21976532s 1.220779669s 1.223924626s 1.224149931s 1.225385563s 1.227911948s 1.261954775s 1.269751016s 1.277863799s 1.281500941s 1.292777051s 1.301450949s 1.306337534s 1.3100594s 1.310311767s 1.310501126s 1.317176386s 1.321466986s 1.328345955s 1.344313856s 1.352100305s 1.360667254s 1.476074375s 1.532227763s 1.537650567s 1.549733659s 1.555428283s 1.556008416s 1.567848186s 1.594416761s 1.620494212s 1.630308248s 1.639353327s 1.643391491s 1.669744407s 1.678526648s]
May  1 01:09:35.923: INFO: 50 %ile: 953.442378ms
May  1 01:09:35.923: INFO: 90 %ile: 1.317176386s
May  1 01:09:35.923: INFO: 99 %ile: 1.669744407s
May  1 01:09:35.923: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:09:35.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7378" for this suite.

• [SLOW TEST:18.060 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":290,"completed":253,"skipped":4311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:09:36.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:09:52.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5622" for this suite.

• [SLOW TEST:16.571 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":290,"completed":254,"skipped":4340,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:09:52.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May  1 01:09:52.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6590'
May  1 01:09:52.925: INFO: stderr: ""
May  1 01:09:52.926: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528
May  1 01:09:52.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6590'
May  1 01:10:04.932: INFO: stderr: ""
May  1 01:10:04.932: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:10:04.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6590" for this suite.

• [SLOW TEST:12.325 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":290,"completed":255,"skipped":4345,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:10:04.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 01:10:05.795: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 01:10:07.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892205, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892205, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892205, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892205, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 01:10:10.956: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:10:10.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5587-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:10:12.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6974" for this suite.
STEP: Destroying namespace "webhook-6974-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.256 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":290,"completed":256,"skipped":4345,"failed":0}
SSSSSSSSSSSSSSSS
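
What makes the pruning variant of this webhook test interesting is apiextensions.k8s.io/v1 semantics: after a mutating webhook injects fields into a custom resource, the API server prunes anything not declared in the CRD's structural schema. A minimal sketch of the CRD side, assuming a webhook that injects a string field (the group, names, and field name are illustrative; the suite also runs a real webhook server, which is omitted here):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.webhook.example.com
spec:
  group: webhook.example.com
  scope: Namespaced
  names: {plural: demos, singular: demo, kind: Demo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          data:
            type: object
            properties:
              mutation-stage-1:     # declared, so a webhook-injected value survives pruning
                type: string
EOF

A field the webhook adds outside this schema would be silently pruned, which is exactly the case the test guards against.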
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:10:12.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
May  1 01:10:12.260: INFO: Waiting up to 5m0s for pod "client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b" in namespace "containers-8588" to be "Succeeded or Failed"
May  1 01:10:12.303: INFO: Pod "client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.299981ms
May  1 01:10:14.543: INFO: Pod "client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28245482s
May  1 01:10:16.547: INFO: Pod "client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.286789182s
STEP: Saw pod success
May  1 01:10:16.547: INFO: Pod "client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b" satisfied condition "Succeeded or Failed"
May  1 01:10:16.550: INFO: Trying to get logs from node latest-worker2 pod client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b container test-container: 
STEP: delete the pod
May  1 01:10:16.589: INFO: Waiting for pod client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b to disappear
May  1 01:10:16.644: INFO: Pod client-containers-4fbc2ef0-3aff-4a1f-af50-bed2d5273a6b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:10:16.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8588" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":290,"completed":257,"skipped":4361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
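
The pod this test builds relies on the rule that a container's command field replaces the image's ENTRYPOINT (while args would replace CMD). A standalone sketch, with an illustrative pod name and a busybox image similar to what the suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-entrypoint-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF
kubectl logs override-entrypoint-demo   # prints: entrypoint overridden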
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:10:16.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-7801
STEP: creating service affinity-nodeport in namespace services-7801
STEP: creating replication controller affinity-nodeport in namespace services-7801
I0501 01:10:16.840809       7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7801, replica count: 3
I0501 01:10:19.891201       7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 01:10:22.891467       7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  1 01:10:22.903: INFO: Creating new exec pod
May  1 01:10:28.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod-affinityddsgj -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80'
May  1 01:10:28.255: INFO: stderr: "I0501 01:10:28.167583    3609 log.go:172] (0xc000bcd340) (0xc000c18140) Create stream\nI0501 01:10:28.167647    3609 log.go:172] (0xc000bcd340) (0xc000c18140) Stream added, broadcasting: 1\nI0501 01:10:28.173764    3609 log.go:172] (0xc000bcd340) Reply frame received for 1\nI0501 01:10:28.173809    3609 log.go:172] (0xc000bcd340) (0xc000730fa0) Create stream\nI0501 01:10:28.173821    3609 log.go:172] (0xc000bcd340) (0xc000730fa0) Stream added, broadcasting: 3\nI0501 01:10:28.174819    3609 log.go:172] (0xc000bcd340) Reply frame received for 3\nI0501 01:10:28.174863    3609 log.go:172] (0xc000bcd340) (0xc0006e4b40) Create stream\nI0501 01:10:28.174878    3609 log.go:172] (0xc000bcd340) (0xc0006e4b40) Stream added, broadcasting: 5\nI0501 01:10:28.175746    3609 log.go:172] (0xc000bcd340) Reply frame received for 5\nI0501 01:10:28.245853    3609 log.go:172] (0xc000bcd340) Data frame received for 5\nI0501 01:10:28.245886    3609 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0501 01:10:28.245908    3609 log.go:172] (0xc0006e4b40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0501 01:10:28.246537    3609 log.go:172] (0xc000bcd340) Data frame received for 5\nI0501 01:10:28.246569    3609 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0501 01:10:28.246602    3609 log.go:172] (0xc0006e4b40) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0501 01:10:28.246671    3609 log.go:172] (0xc000bcd340) Data frame received for 3\nI0501 01:10:28.246696    3609 log.go:172] (0xc000730fa0) (3) Data frame handling\nI0501 01:10:28.246918    3609 log.go:172] (0xc000bcd340) Data frame received for 5\nI0501 01:10:28.246936    3609 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0501 01:10:28.249094    3609 log.go:172] (0xc000bcd340) Data frame received for 1\nI0501 01:10:28.249290    3609 log.go:172] (0xc000c18140) (1) Data frame handling\nI0501 01:10:28.249328    3609 log.go:172] (0xc000c18140) (1) Data frame sent\nI0501 01:10:28.249356    3609 log.go:172] (0xc000bcd340) (0xc000c18140) Stream removed, broadcasting: 1\nI0501 01:10:28.249440    3609 log.go:172] (0xc000bcd340) Go away received\nI0501 01:10:28.249828    3609 log.go:172] (0xc000bcd340) (0xc000c18140) Stream removed, broadcasting: 1\nI0501 01:10:28.249856    3609 log.go:172] (0xc000bcd340) (0xc000730fa0) Stream removed, broadcasting: 3\nI0501 01:10:28.249872    3609 log.go:172] (0xc000bcd340) (0xc0006e4b40) Stream removed, broadcasting: 5\n"
May  1 01:10:28.255: INFO: stdout: ""
May  1 01:10:28.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod-affinityddsgj -- /bin/sh -x -c nc -zv -t -w 2 10.97.166.116 80'
May  1 01:10:28.472: INFO: stderr: "I0501 01:10:28.390406    3631 log.go:172] (0xc000b094a0) (0xc000a74500) Create stream\nI0501 01:10:28.390460    3631 log.go:172] (0xc000b094a0) (0xc000a74500) Stream added, broadcasting: 1\nI0501 01:10:28.395204    3631 log.go:172] (0xc000b094a0) Reply frame received for 1\nI0501 01:10:28.395257    3631 log.go:172] (0xc000b094a0) (0xc00066e500) Create stream\nI0501 01:10:28.395289    3631 log.go:172] (0xc000b094a0) (0xc00066e500) Stream added, broadcasting: 3\nI0501 01:10:28.396164    3631 log.go:172] (0xc000b094a0) Reply frame received for 3\nI0501 01:10:28.396199    3631 log.go:172] (0xc000b094a0) (0xc0005f9b80) Create stream\nI0501 01:10:28.396210    3631 log.go:172] (0xc000b094a0) (0xc0005f9b80) Stream added, broadcasting: 5\nI0501 01:10:28.397051    3631 log.go:172] (0xc000b094a0) Reply frame received for 5\nI0501 01:10:28.465619    3631 log.go:172] (0xc000b094a0) Data frame received for 5\nI0501 01:10:28.465646    3631 log.go:172] (0xc0005f9b80) (5) Data frame handling\nI0501 01:10:28.465658    3631 log.go:172] (0xc0005f9b80) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.166.116 80\nConnection to 10.97.166.116 80 port [tcp/http] succeeded!\nI0501 01:10:28.465916    3631 log.go:172] (0xc000b094a0) Data frame received for 3\nI0501 01:10:28.465940    3631 log.go:172] (0xc00066e500) (3) Data frame handling\nI0501 01:10:28.465979    3631 log.go:172] (0xc000b094a0) Data frame received for 5\nI0501 01:10:28.466010    3631 log.go:172] (0xc0005f9b80) (5) Data frame handling\nI0501 01:10:28.466864    3631 log.go:172] (0xc000b094a0) Data frame received for 1\nI0501 01:10:28.466885    3631 log.go:172] (0xc000a74500) (1) Data frame handling\nI0501 01:10:28.466897    3631 log.go:172] (0xc000a74500) (1) Data frame sent\nI0501 01:10:28.467028    3631 log.go:172] (0xc000b094a0) (0xc000a74500) Stream removed, broadcasting: 1\nI0501 01:10:28.467059    3631 log.go:172] (0xc000b094a0) Go away received\nI0501 01:10:28.467465    3631 log.go:172] (0xc000b094a0) (0xc000a74500) Stream removed, broadcasting: 1\nI0501 01:10:28.467488    3631 log.go:172] (0xc000b094a0) (0xc00066e500) Stream removed, broadcasting: 3\nI0501 01:10:28.467499    3631 log.go:172] (0xc000b094a0) (0xc0005f9b80) Stream removed, broadcasting: 5\n"
May  1 01:10:28.472: INFO: stdout: ""
May  1 01:10:28.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod-affinityddsgj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30818'
May  1 01:10:28.667: INFO: stderr: "I0501 01:10:28.604764    3653 log.go:172] (0xc000028000) (0xc0004e4280) Create stream\nI0501 01:10:28.604828    3653 log.go:172] (0xc000028000) (0xc0004e4280) Stream added, broadcasting: 1\nI0501 01:10:28.608458    3653 log.go:172] (0xc000028000) Reply frame received for 1\nI0501 01:10:28.608512    3653 log.go:172] (0xc000028000) (0xc0004e5220) Create stream\nI0501 01:10:28.608527    3653 log.go:172] (0xc000028000) (0xc0004e5220) Stream added, broadcasting: 3\nI0501 01:10:28.609673    3653 log.go:172] (0xc000028000) Reply frame received for 3\nI0501 01:10:28.609703    3653 log.go:172] (0xc000028000) (0xc00052d9a0) Create stream\nI0501 01:10:28.609712    3653 log.go:172] (0xc000028000) (0xc00052d9a0) Stream added, broadcasting: 5\nI0501 01:10:28.610985    3653 log.go:172] (0xc000028000) Reply frame received for 5\nI0501 01:10:28.661220    3653 log.go:172] (0xc000028000) Data frame received for 5\nI0501 01:10:28.661274    3653 log.go:172] (0xc00052d9a0) (5) Data frame handling\nI0501 01:10:28.661281    3653 log.go:172] (0xc00052d9a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30818\nConnection to 172.17.0.13 30818 port [tcp/30818] succeeded!\nI0501 01:10:28.661302    3653 log.go:172] (0xc000028000) Data frame received for 3\nI0501 01:10:28.661341    3653 log.go:172] (0xc0004e5220) (3) Data frame handling\nI0501 01:10:28.661375    3653 log.go:172] (0xc000028000) Data frame received for 5\nI0501 01:10:28.661402    3653 log.go:172] (0xc00052d9a0) (5) Data frame handling\nI0501 01:10:28.662354    3653 log.go:172] (0xc000028000) Data frame received for 1\nI0501 01:10:28.662370    3653 log.go:172] (0xc0004e4280) (1) Data frame handling\nI0501 01:10:28.662377    3653 log.go:172] (0xc0004e4280) (1) Data frame sent\nI0501 01:10:28.662388    3653 log.go:172] (0xc000028000) (0xc0004e4280) Stream removed, broadcasting: 1\nI0501 01:10:28.662438    3653 log.go:172] (0xc000028000) Go away received\nI0501 01:10:28.662623    3653 log.go:172] (0xc000028000) (0xc0004e4280) Stream removed, broadcasting: 1\nI0501 01:10:28.662635    3653 log.go:172] (0xc000028000) (0xc0004e5220) Stream removed, broadcasting: 3\nI0501 01:10:28.662641    3653 log.go:172] (0xc000028000) (0xc00052d9a0) Stream removed, broadcasting: 5\n"
May  1 01:10:28.667: INFO: stdout: ""
May  1 01:10:28.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod-affinityddsgj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30818'
May  1 01:10:28.899: INFO: stderr: "I0501 01:10:28.816968    3673 log.go:172] (0xc000a01290) (0xc0009e83c0) Create stream\nI0501 01:10:28.817031    3673 log.go:172] (0xc000a01290) (0xc0009e83c0) Stream added, broadcasting: 1\nI0501 01:10:28.825437    3673 log.go:172] (0xc000a01290) Reply frame received for 1\nI0501 01:10:28.825505    3673 log.go:172] (0xc000a01290) (0xc0006f2000) Create stream\nI0501 01:10:28.825522    3673 log.go:172] (0xc000a01290) (0xc0006f2000) Stream added, broadcasting: 3\nI0501 01:10:28.827409    3673 log.go:172] (0xc000a01290) Reply frame received for 3\nI0501 01:10:28.827447    3673 log.go:172] (0xc000a01290) (0xc000670640) Create stream\nI0501 01:10:28.827460    3673 log.go:172] (0xc000a01290) (0xc000670640) Stream added, broadcasting: 5\nI0501 01:10:28.828280    3673 log.go:172] (0xc000a01290) Reply frame received for 5\nI0501 01:10:28.890851    3673 log.go:172] (0xc000a01290) Data frame received for 5\nI0501 01:10:28.890887    3673 log.go:172] (0xc000670640) (5) Data frame handling\nI0501 01:10:28.890917    3673 log.go:172] (0xc000670640) (5) Data frame sent\nI0501 01:10:28.890946    3673 log.go:172] (0xc000a01290) Data frame received for 5\nI0501 01:10:28.890964    3673 log.go:172] (0xc000670640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30818\nConnection to 172.17.0.12 30818 port [tcp/30818] succeeded!\nI0501 01:10:28.891047    3673 log.go:172] (0xc000a01290) Data frame received for 3\nI0501 01:10:28.891081    3673 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0501 01:10:28.892879    3673 log.go:172] (0xc000a01290) Data frame received for 1\nI0501 01:10:28.892903    3673 log.go:172] (0xc0009e83c0) (1) Data frame handling\nI0501 01:10:28.892921    3673 log.go:172] (0xc0009e83c0) (1) Data frame sent\nI0501 01:10:28.892936    3673 log.go:172] (0xc000a01290) (0xc0009e83c0) Stream removed, broadcasting: 1\nI0501 01:10:28.892965    3673 log.go:172] (0xc000a01290) Go away received\nI0501 01:10:28.893576    3673 log.go:172] (0xc000a01290) (0xc0009e83c0) Stream removed, broadcasting: 1\nI0501 01:10:28.893614    3673 log.go:172] (0xc000a01290) (0xc0006f2000) Stream removed, broadcasting: 3\nI0501 01:10:28.893633    3673 log.go:172] (0xc000a01290) (0xc000670640) Stream removed, broadcasting: 5\n"
May  1 01:10:28.899: INFO: stdout: ""
May  1 01:10:28.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod-affinityddsgj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30818/ ; done'
May  1 01:10:29.182: INFO: stderr: "I0501 01:10:29.032425    3693 log.go:172] (0xc000989a20) (0xc000bca640) Create stream\nI0501 01:10:29.032495    3693 log.go:172] (0xc000989a20) (0xc000bca640) Stream added, broadcasting: 1\nI0501 01:10:29.036846    3693 log.go:172] (0xc000989a20) Reply frame received for 1\nI0501 01:10:29.036891    3693 log.go:172] (0xc000989a20) (0xc00070c5a0) Create stream\nI0501 01:10:29.036901    3693 log.go:172] (0xc000989a20) (0xc00070c5a0) Stream added, broadcasting: 3\nI0501 01:10:29.038016    3693 log.go:172] (0xc000989a20) Reply frame received for 3\nI0501 01:10:29.038069    3693 log.go:172] (0xc000989a20) (0xc000840f00) Create stream\nI0501 01:10:29.038086    3693 log.go:172] (0xc000989a20) (0xc000840f00) Stream added, broadcasting: 5\nI0501 01:10:29.039239    3693 log.go:172] (0xc000989a20) Reply frame received for 5\nI0501 01:10:29.089546    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.089583    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.089599    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.089627    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.089639    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.089651    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.098042    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.098069    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.098096    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.099163    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.099189    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.099209    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.099241    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.099255    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.099277    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.104262    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.104289    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.104312    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.104558    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.104587    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.104608    3693 log.go:172] (0xc000840f00) (5) Data frame sent\nI0501 01:10:29.104629    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.104641    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.104653    3693 log.go:172] (0xc000989a20) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.104663    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.104722    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.104743    3693 log.go:172] (0xc000840f00) (5) Data frame sent\nI0501 01:10:29.111405    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.111435    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.111454    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.112095    3693 
log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.112145    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.112165    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.112189    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.112209    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.112238    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.116606    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.116631    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.116662    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.116858    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.116882    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.116912    3693 log.go:172] (0xc000840f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.116928    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.116951    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.116967    3693 log.go:172] (0xc000840f00) (5) Data frame sent\nI0501 01:10:29.120732    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.120747    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.120754    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.121508    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.121539    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.121565    3693 log.go:172] (0xc000840f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.121581    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.121607    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.121624    3693 log.go:172] (0xc000840f00) (5) Data frame sent\nI0501 01:10:29.124798    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.124818    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.124836    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.125051    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.125064    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.125079    3693 log.go:172] (0xc000840f00) (5) Data frame sent\nI0501 01:10:29.125085    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.125089    3693 log.go:172] (0xc000840f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.125100    3693 log.go:172] (0xc000840f00) (5) Data frame sent\nI0501 01:10:29.125621    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.125640    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.125664    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.130758    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.130777    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.130791    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.131598    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.131615    3693 log.go:172] (0xc00070c5a0) 
(3) Data frame handling\nI0501 01:10:29.131626    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.131639    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.131649    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.131659    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.134718    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.134750    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.134791    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.134969    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.134981    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.134988    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.135001    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.135019    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.135035    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.139340    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.139374    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.139406    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.140794    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.140814    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.140824    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.141590    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.141611    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.141622    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.148054    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.148079    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.148100    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.148484    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.148502    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.148515    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.148526    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.148545    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.148570    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.152702    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.152722    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.152743    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.153327    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.153349    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.153361    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.153575    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.153588    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 
01:10:29.153599    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.156649    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.156679    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.156708    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.156973    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.157019    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.157031    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.157043    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.157050    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.157056    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.160851    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.160869    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.160881    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.161605    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.161632    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.161651    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.161681    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.161692    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.161719    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.165498    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.165517    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.165538    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.165816    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.165839    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.165849    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.165867    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.165882    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.165893    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.169876    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.169892    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.169907    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.170415    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.170429    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.170438    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.170452    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.170469    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.170484    3693 log.go:172] (0xc000840f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30818/\nI0501 01:10:29.173806    3693 log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.173820    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.173827    3693 log.go:172] (0xc00070c5a0) (3) Data frame sent\nI0501 01:10:29.174370    3693 
log.go:172] (0xc000989a20) Data frame received for 3\nI0501 01:10:29.174384    3693 log.go:172] (0xc00070c5a0) (3) Data frame handling\nI0501 01:10:29.174499    3693 log.go:172] (0xc000989a20) Data frame received for 5\nI0501 01:10:29.174513    3693 log.go:172] (0xc000840f00) (5) Data frame handling\nI0501 01:10:29.176137    3693 log.go:172] (0xc000989a20) Data frame received for 1\nI0501 01:10:29.176159    3693 log.go:172] (0xc000bca640) (1) Data frame handling\nI0501 01:10:29.176177    3693 log.go:172] (0xc000bca640) (1) Data frame sent\nI0501 01:10:29.176204    3693 log.go:172] (0xc000989a20) (0xc000bca640) Stream removed, broadcasting: 1\nI0501 01:10:29.176231    3693 log.go:172] (0xc000989a20) Go away received\nI0501 01:10:29.176633    3693 log.go:172] (0xc000989a20) (0xc000bca640) Stream removed, broadcasting: 1\nI0501 01:10:29.176651    3693 log.go:172] (0xc000989a20) (0xc00070c5a0) Stream removed, broadcasting: 3\nI0501 01:10:29.176661    3693 log.go:172] (0xc000989a20) (0xc000840f00) Stream removed, broadcasting: 5\n"
May  1 01:10:29.182: INFO: stdout: "\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw\naffinity-nodeport-8fwnw"
May  1 01:10:29.182: INFO: Received response from host: 
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Received response from host: affinity-nodeport-8fwnw
May  1 01:10:29.183: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-7801, will wait for the garbage collector to delete the pods
May  1 01:10:29.293: INFO: Deleting ReplicationController affinity-nodeport took: 7.518618ms
May  1 01:10:29.593: INFO: Terminating ReplicationController affinity-nodeport pods took: 300.232475ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:10:45.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7801" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:28.702 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":290,"completed":258,"skipped":4383,"failed":0}
SSSSSSSSSSSSS
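
The sixteen identical hostnames above are the point of the test: with sessionAffinity set to ClientIP, every request from the exec pod lands on the same backend. A hand-rolled version, using a Deployment in place of the suite's ReplicationController and illustrative names:

kubectl create deployment affinity-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl scale deployment affinity-demo --replicas=3
kubectl expose deployment affinity-demo --port=80 --type=NodePort --session-affinity=ClientIP
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc affinity-demo -o jsonpath='{.spec.ports[0].nodePort}')
# run from a host that can reach the node; all responses should come from one pod
for i in $(seq 0 15); do curl -s --connect-timeout 2 http://$NODE_IP:$NODE_PORT/; done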
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:10:45.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:10:45.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  1 01:10:47.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6752 create -f -'
May  1 01:10:50.912: INFO: stderr: ""
May  1 01:10:50.912: INFO: stdout: "e2e-test-crd-publish-openapi-1357-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May  1 01:10:50.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6752 delete e2e-test-crd-publish-openapi-1357-crds test-cr'
May  1 01:10:51.108: INFO: stderr: ""
May  1 01:10:51.108: INFO: stdout: "e2e-test-crd-publish-openapi-1357-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May  1 01:10:51.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6752 apply -f -'
May  1 01:10:51.515: INFO: stderr: ""
May  1 01:10:51.515: INFO: stdout: "e2e-test-crd-publish-openapi-1357-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May  1 01:10:51.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6752 delete e2e-test-crd-publish-openapi-1357-crds test-cr'
May  1 01:10:51.635: INFO: stderr: ""
May  1 01:10:51.635: INFO: stdout: "e2e-test-crd-publish-openapi-1357-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May  1 01:10:51.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1357-crds'
May  1 01:10:51.873: INFO: stderr: ""
May  1 01:10:51.873: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1357-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:10:53.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6752" for this suite.

• [SLOW TEST:8.432 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":290,"completed":259,"skipped":4396,"failed":0}
SSSSSSSSSSSSSSSS
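
The CRD under test sets x-kubernetes-preserve-unknown-fields: true at the schema root, which is why kubectl happily creates and applies a CR carrying arbitrary properties, and why kubectl explain prints an empty description. A minimal reproduction with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: unknowns.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: unknowns, singular: unknown, kind: Unknown}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # root-level: unknown properties are kept, not pruned
EOF
kubectl wait --for=condition=established --timeout=60s crd/unknowns.example.com
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Unknown
metadata: {name: test-cr}
anything: {goes: here}                               # accepted despite being absent from the schema
EOF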
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:10:53.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80
May  1 01:10:53.966: INFO: Waiting up to 1m0s for all nodes to be ready
May  1 01:11:53.990: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:11:53.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
May  1 01:11:58.174: INFO: found a healthy node: latest-worker2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:12:18.481: INFO: pods created so far: [1 1 1]
May  1 01:12:18.481: INFO: length of pods created so far: 3
May  1 01:12:28.491: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:12:35.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4111" for this suite.
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:12:35.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4679" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74

• [SLOW TEST:101.819 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428
    runs ReplicaSets to verify preemption running path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":290,"completed":260,"skipped":4412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
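
PreemptionExecutionPath drives ReplicaSets at increasing priorities, and the [2 2 1] pod counts above reflect a lower-priority pod being displaced. The moving part is the PriorityClass; a minimal sketch with illustrative names and values:

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-demo
value: 1000                 # higher value wins; the scheduler may evict lower-priority pods to fit this one
globalDefault: false
description: demo class for preemption
EOF
# on a node with no spare capacity, this pod can preempt lower-priority pods
kubectl run preemptor --image=k8s.gcr.io/pause:3.2 \
  --overrides='{"apiVersion":"v1","spec":{"priorityClassName":"high-priority-demo"}}'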
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:12:35.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:14:35.708: INFO: Deleting pod "var-expansion-e4e0f8cb-8cdb-4551-a9fe-a91c7ab584b1" in namespace "var-expansion-9187"
May  1 01:14:35.714: INFO: Wait up to 5m0s for pod "var-expansion-e4e0f8cb-8cdb-4551-a9fe-a91c7ab584b1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:14:39.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9187" for this suite.

• [SLOW TEST:124.154 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":290,"completed":261,"skipped":4434,"failed":0}
SSSSSSS
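
This negative test is quiet in the log because the assertion is that nothing happens: the pod is created, never starts, and is deleted after the wait. A rough sketch of the pod shape involved (names are illustrative; the assumption here is that the kubelet refuses to prepare the expanded subpath, since only $(VAR) expansion is supported in subPathExpr and backticks are never evaluated):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-backtick-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: POD_NAME
      value: "`"                     # a backtick; shell-style substitution is never performed
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)       # expands to a subpath containing a backtick
  volumes:
  - name: workdir
    emptyDir: {}
EOF
kubectl get pod subpath-backtick-demo   # expected to stay out of Running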
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:14:39.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May  1 01:14:39.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5069367-fb16-451f-b564-36838c234929" in namespace "downward-api-4481" to be "Succeeded or Failed"
May  1 01:14:39.974: INFO: Pod "downwardapi-volume-d5069367-fb16-451f-b564-36838c234929": Phase="Pending", Reason="", readiness=false. Elapsed: 20.848947ms
May  1 01:14:41.979: INFO: Pod "downwardapi-volume-d5069367-fb16-451f-b564-36838c234929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025115192s
May  1 01:14:43.983: INFO: Pod "downwardapi-volume-d5069367-fb16-451f-b564-36838c234929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029635805s
STEP: Saw pod success
May  1 01:14:43.983: INFO: Pod "downwardapi-volume-d5069367-fb16-451f-b564-36838c234929" satisfied condition "Succeeded or Failed"
May  1 01:14:43.987: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d5069367-fb16-451f-b564-36838c234929 container client-container: 
STEP: delete the pod
May  1 01:14:44.036: INFO: Waiting for pod downwardapi-volume-d5069367-fb16-451f-b564-36838c234929 to disappear
May  1 01:14:44.065: INFO: Pod downwardapi-volume-d5069367-fb16-451f-b564-36838c234929 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:14:44.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4481" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":290,"completed":262,"skipped":4441,"failed":0}
SSS
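
The downward API volume used above exposes pod metadata as files. An equivalent standalone pod (illustrative name) whose only job is to print its own name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # file content is the pod's own name
EOF
kubectl logs downward-podname-demo   # prints: downward-podname-demo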
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:14:44.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
May  1 01:14:44.723: INFO: created pod pod-service-account-defaultsa
May  1 01:14:44.723: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May  1 01:14:44.735: INFO: created pod pod-service-account-mountsa
May  1 01:14:44.735: INFO: pod pod-service-account-mountsa service account token volume mount: true
May  1 01:14:44.777: INFO: created pod pod-service-account-nomountsa
May  1 01:14:44.777: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May  1 01:14:44.856: INFO: created pod pod-service-account-defaultsa-mountspec
May  1 01:14:44.856: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May  1 01:14:44.868: INFO: created pod pod-service-account-mountsa-mountspec
May  1 01:14:44.868: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May  1 01:14:44.923: INFO: created pod pod-service-account-nomountsa-mountspec
May  1 01:14:44.923: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May  1 01:14:45.000: INFO: created pod pod-service-account-defaultsa-nomountspec
May  1 01:14:45.000: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May  1 01:14:45.017: INFO: created pod pod-service-account-mountsa-nomountspec
May  1 01:14:45.017: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May  1 01:14:45.050: INFO: created pod pod-service-account-nomountsa-nomountspec
May  1 01:14:45.050: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:14:45.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5922" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":290,"completed":263,"skipped":4444,"failed":0}
SSSSSSSSSSS
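
The nine pods above sweep the combinations of automountServiceAccountToken on the ServiceAccount and on the pod spec; when both are set, the pod-level field wins. The two knobs, sketched with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # default for pods using this SA
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level setting overrides the SA when both are present
  containers:
  - name: c
    image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl get pod nomount-demo -o jsonpath='{.spec.containers[0].volumeMounts}'   # empty: no token volume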
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:14:45.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May  1 01:14:58.110: INFO: Successfully updated pod "pod-update-b1bd1de5-731f-4bc7-9ee9-201ff5d9394d"
STEP: verifying the updated pod is in kubernetes
May  1 01:14:58.350: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:14:58.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3387" for this suite.

• [SLOW TEST:13.387 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":290,"completed":264,"skipped":4455,"failed":0}
SSSSSSSSSSSSSSSSS
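
Most of a pod's spec is immutable once created; an update like the one above succeeds because it touches one of the mutable pieces, such as metadata labels. The same round trip by hand, with an illustrative name:

kubectl run pod-update-demo --image=docker.io/library/httpd:2.4.38-alpine --labels=time=createdat
kubectl label pod pod-update-demo time=patched --overwrite   # in-place update; no restart needed
kubectl get pod pod-update-demo --show-labels                # shows time=patched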
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:14:58.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
May  1 01:15:03.262: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3209 PodName:var-expansion-f87c416e-cb6e-4111-a64f-89d8c41ea0bb ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 01:15:03.262: INFO: >>> kubeConfig: /root/.kube/config
I0501 01:15:03.300327       7 log.go:172] (0xc002eb8370) (0xc001f86be0) Create stream
I0501 01:15:03.300365       7 log.go:172] (0xc002eb8370) (0xc001f86be0) Stream added, broadcasting: 1
I0501 01:15:03.302274       7 log.go:172] (0xc002eb8370) Reply frame received for 1
I0501 01:15:03.302304       7 log.go:172] (0xc002eb8370) (0xc0023fe6e0) Create stream
I0501 01:15:03.302314       7 log.go:172] (0xc002eb8370) (0xc0023fe6e0) Stream added, broadcasting: 3
I0501 01:15:03.303209       7 log.go:172] (0xc002eb8370) Reply frame received for 3
I0501 01:15:03.303249       7 log.go:172] (0xc002eb8370) (0xc0010ad040) Create stream
I0501 01:15:03.303258       7 log.go:172] (0xc002eb8370) (0xc0010ad040) Stream added, broadcasting: 5
I0501 01:15:03.304070       7 log.go:172] (0xc002eb8370) Reply frame received for 5
I0501 01:15:03.380763       7 log.go:172] (0xc002eb8370) Data frame received for 5
I0501 01:15:03.380811       7 log.go:172] (0xc002eb8370) Data frame received for 3
I0501 01:15:03.380854       7 log.go:172] (0xc0023fe6e0) (3) Data frame handling
I0501 01:15:03.380880       7 log.go:172] (0xc0010ad040) (5) Data frame handling
I0501 01:15:03.382622       7 log.go:172] (0xc002eb8370) Data frame received for 1
I0501 01:15:03.382655       7 log.go:172] (0xc001f86be0) (1) Data frame handling
I0501 01:15:03.382675       7 log.go:172] (0xc001f86be0) (1) Data frame sent
I0501 01:15:03.382695       7 log.go:172] (0xc002eb8370) (0xc001f86be0) Stream removed, broadcasting: 1
I0501 01:15:03.382718       7 log.go:172] (0xc002eb8370) Go away received
I0501 01:15:03.382782       7 log.go:172] (0xc002eb8370) (0xc001f86be0) Stream removed, broadcasting: 1
I0501 01:15:03.382810       7 log.go:172] (0xc002eb8370) (0xc0023fe6e0) Stream removed, broadcasting: 3
I0501 01:15:03.382830       7 log.go:172] (0xc002eb8370) (0xc0010ad040) Stream removed, broadcasting: 5
STEP: test for file in mounted path
May  1 01:15:03.386: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3209 PodName:var-expansion-f87c416e-cb6e-4111-a64f-89d8c41ea0bb ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 01:15:03.386: INFO: >>> kubeConfig: /root/.kube/config
I0501 01:15:03.419264       7 log.go:172] (0xc00259b080) (0xc0023fedc0) Create stream
I0501 01:15:03.419291       7 log.go:172] (0xc00259b080) (0xc0023fedc0) Stream added, broadcasting: 1
I0501 01:15:03.421659       7 log.go:172] (0xc00259b080) Reply frame received for 1
I0501 01:15:03.421704       7 log.go:172] (0xc00259b080) (0xc001f86e60) Create stream
I0501 01:15:03.421719       7 log.go:172] (0xc00259b080) (0xc001f86e60) Stream added, broadcasting: 3
I0501 01:15:03.422683       7 log.go:172] (0xc00259b080) Reply frame received for 3
I0501 01:15:03.422722       7 log.go:172] (0xc00259b080) (0xc0023fee60) Create stream
I0501 01:15:03.422735       7 log.go:172] (0xc00259b080) (0xc0023fee60) Stream added, broadcasting: 5
I0501 01:15:03.423566       7 log.go:172] (0xc00259b080) Reply frame received for 5
I0501 01:15:03.484351       7 log.go:172] (0xc00259b080) Data frame received for 3
I0501 01:15:03.484387       7 log.go:172] (0xc00259b080) Data frame received for 5
I0501 01:15:03.484408       7 log.go:172] (0xc0023fee60) (5) Data frame handling
I0501 01:15:03.484436       7 log.go:172] (0xc001f86e60) (3) Data frame handling
I0501 01:15:03.486127       7 log.go:172] (0xc00259b080) Data frame received for 1
I0501 01:15:03.486155       7 log.go:172] (0xc0023fedc0) (1) Data frame handling
I0501 01:15:03.486200       7 log.go:172] (0xc0023fedc0) (1) Data frame sent
I0501 01:15:03.486233       7 log.go:172] (0xc00259b080) (0xc0023fedc0) Stream removed, broadcasting: 1
I0501 01:15:03.486351       7 log.go:172] (0xc00259b080) (0xc0023fedc0) Stream removed, broadcasting: 1
I0501 01:15:03.486369       7 log.go:172] (0xc00259b080) (0xc001f86e60) Stream removed, broadcasting: 3
I0501 01:15:03.486621       7 log.go:172] (0xc00259b080) (0xc0023fee60) Stream removed, broadcasting: 5
STEP: updating the annotation value
I0501 01:15:03.486654       7 log.go:172] (0xc00259b080) Go away received
May  1 01:15:04.096: INFO: Successfully updated pod "var-expansion-f87c416e-cb6e-4111-a64f-89d8c41ea0bb"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
May  1 01:15:04.294: INFO: Deleting pod "var-expansion-f87c416e-cb6e-4111-a64f-89d8c41ea0bb" in namespace "var-expansion-3209"
May  1 01:15:04.299: INFO: Wait up to 5m0s for pod "var-expansion-f87c416e-cb6e-4111-a64f-89d8c41ea0bb" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:15:46.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3209" for this suite.

• [SLOW TEST:47.748 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":290,"completed":265,"skipped":4472,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:15:46.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-380a9513-47b6-47e5-95a2-f662598afe97
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-380a9513-47b6-47e5-95a2-f662598afe97
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:15:52.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8353" for this suite.

• [SLOW TEST:6.218 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":266,"skipped":4482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:15:52.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-68b7c4c0-0e0b-40cf-ab1d-763c99967627
STEP: Creating a pod to test consume secrets
May  1 01:15:52.656: INFO: Waiting up to 5m0s for pod "pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9" in namespace "secrets-7770" to be "Succeeded or Failed"
May  1 01:15:52.680: INFO: Pod "pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.791339ms
May  1 01:15:54.685: INFO: Pod "pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028380885s
May  1 01:15:56.699: INFO: Pod "pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042807419s
STEP: Saw pod success
May  1 01:15:56.699: INFO: Pod "pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9" satisfied condition "Succeeded or Failed"
May  1 01:15:56.702: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9 container secret-volume-test: 
STEP: delete the pod
May  1 01:15:56.744: INFO: Waiting for pod pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9 to disappear
May  1 01:15:56.758: INFO: Pod pod-secrets-c132898d-b76a-40ca-910b-4caca267b0b9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:15:56.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7770" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":267,"skipped":4509,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:15:56.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-8416
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  1 01:15:57.050: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  1 01:15:57.098: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 01:15:59.102: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 01:16:01.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:03.101: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:05.101: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:07.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:09.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:11.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:13.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:15.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:17.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:16:19.102: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  1 01:16:19.108: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  1 01:16:21.112: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  1 01:16:27.262: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.216 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8416 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 01:16:27.262: INFO: >>> kubeConfig: /root/.kube/config
I0501 01:16:27.298212       7 log.go:172] (0xc0027aa370) (0xc00246e460) Create stream
I0501 01:16:27.298241       7 log.go:172] (0xc0027aa370) (0xc00246e460) Stream added, broadcasting: 1
I0501 01:16:27.299931       7 log.go:172] (0xc0027aa370) Reply frame received for 1
I0501 01:16:27.299983       7 log.go:172] (0xc0027aa370) (0xc001bc90e0) Create stream
I0501 01:16:27.299995       7 log.go:172] (0xc0027aa370) (0xc001bc90e0) Stream added, broadcasting: 3
I0501 01:16:27.300920       7 log.go:172] (0xc0027aa370) Reply frame received for 3
I0501 01:16:27.300977       7 log.go:172] (0xc0027aa370) (0xc0013e2320) Create stream
I0501 01:16:27.301005       7 log.go:172] (0xc0027aa370) (0xc0013e2320) Stream added, broadcasting: 5
I0501 01:16:27.302022       7 log.go:172] (0xc0027aa370) Reply frame received for 5
I0501 01:16:28.467253       7 log.go:172] (0xc0027aa370) Data frame received for 3
I0501 01:16:28.467285       7 log.go:172] (0xc001bc90e0) (3) Data frame handling
I0501 01:16:28.467299       7 log.go:172] (0xc001bc90e0) (3) Data frame sent
I0501 01:16:28.467347       7 log.go:172] (0xc0027aa370) Data frame received for 5
I0501 01:16:28.467425       7 log.go:172] (0xc0013e2320) (5) Data frame handling
I0501 01:16:28.467477       7 log.go:172] (0xc0027aa370) Data frame received for 3
I0501 01:16:28.467511       7 log.go:172] (0xc001bc90e0) (3) Data frame handling
I0501 01:16:28.469752       7 log.go:172] (0xc0027aa370) Data frame received for 1
I0501 01:16:28.469788       7 log.go:172] (0xc00246e460) (1) Data frame handling
I0501 01:16:28.469826       7 log.go:172] (0xc00246e460) (1) Data frame sent
I0501 01:16:28.469899       7 log.go:172] (0xc0027aa370) (0xc00246e460) Stream removed, broadcasting: 1
I0501 01:16:28.470015       7 log.go:172] (0xc0027aa370) Go away received
I0501 01:16:28.470087       7 log.go:172] (0xc0027aa370) (0xc00246e460) Stream removed, broadcasting: 1
I0501 01:16:28.470115       7 log.go:172] (0xc0027aa370) (0xc001bc90e0) Stream removed, broadcasting: 3
I0501 01:16:28.470134       7 log.go:172] (0xc0027aa370) (0xc0013e2320) Stream removed, broadcasting: 5
May  1 01:16:28.470: INFO: Found all expected endpoints: [netserver-0]
May  1 01:16:28.474: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.181 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8416 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 01:16:28.474: INFO: >>> kubeConfig: /root/.kube/config
I0501 01:16:28.511507       7 log.go:172] (0xc0027aa9a0) (0xc00246e6e0) Create stream
I0501 01:16:28.511535       7 log.go:172] (0xc0027aa9a0) (0xc00246e6e0) Stream added, broadcasting: 1
I0501 01:16:28.513644       7 log.go:172] (0xc0027aa9a0) Reply frame received for 1
I0501 01:16:28.513698       7 log.go:172] (0xc0027aa9a0) (0xc0023ffae0) Create stream
I0501 01:16:28.513715       7 log.go:172] (0xc0027aa9a0) (0xc0023ffae0) Stream added, broadcasting: 3
I0501 01:16:28.514698       7 log.go:172] (0xc0027aa9a0) Reply frame received for 3
I0501 01:16:28.514741       7 log.go:172] (0xc0027aa9a0) (0xc001bc9180) Create stream
I0501 01:16:28.514759       7 log.go:172] (0xc0027aa9a0) (0xc001bc9180) Stream added, broadcasting: 5
I0501 01:16:28.515714       7 log.go:172] (0xc0027aa9a0) Reply frame received for 5
I0501 01:16:29.582639       7 log.go:172] (0xc0027aa9a0) Data frame received for 3
I0501 01:16:29.582680       7 log.go:172] (0xc0023ffae0) (3) Data frame handling
I0501 01:16:29.582705       7 log.go:172] (0xc0023ffae0) (3) Data frame sent
I0501 01:16:29.582756       7 log.go:172] (0xc0027aa9a0) Data frame received for 5
I0501 01:16:29.582814       7 log.go:172] (0xc001bc9180) (5) Data frame handling
I0501 01:16:29.582867       7 log.go:172] (0xc0027aa9a0) Data frame received for 3
I0501 01:16:29.582891       7 log.go:172] (0xc0023ffae0) (3) Data frame handling
I0501 01:16:29.584742       7 log.go:172] (0xc0027aa9a0) Data frame received for 1
I0501 01:16:29.584847       7 log.go:172] (0xc00246e6e0) (1) Data frame handling
I0501 01:16:29.584884       7 log.go:172] (0xc00246e6e0) (1) Data frame sent
I0501 01:16:29.584909       7 log.go:172] (0xc0027aa9a0) (0xc00246e6e0) Stream removed, broadcasting: 1
I0501 01:16:29.585024       7 log.go:172] (0xc0027aa9a0) (0xc00246e6e0) Stream removed, broadcasting: 1
I0501 01:16:29.585082       7 log.go:172] (0xc0027aa9a0) (0xc0023ffae0) Stream removed, broadcasting: 3
I0501 01:16:29.585333       7 log.go:172] (0xc0027aa9a0) (0xc001bc9180) Stream removed, broadcasting: 5
May  1 01:16:29.585: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:16:29.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0501 01:16:29.585456       7 log.go:172] (0xc0027aa9a0) Go away received
STEP: Destroying namespace "pod-network-test-8416" for this suite.

• [SLOW TEST:32.828 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":268,"skipped":4513,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:16:29.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 01:16:30.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 01:16:32.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892590, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892590, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892590, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892590, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 01:16:35.198: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:16:35.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7185-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:16:37.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1929" for this suite.
STEP: Destroying namespace "webhook-1929-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.612 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":290,"completed":269,"skipped":4523,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:16:37.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May  1 01:16:44.841: INFO: 0 pods remaining
May  1 01:16:44.841: INFO: 0 pods have a nil DeletionTimestamp
May  1 01:16:44.841: INFO: 
STEP: Gathering metrics
W0501 01:16:46.313726       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  1 01:16:46.313: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:16:46.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7816" for this suite.

• [SLOW TEST:9.183 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":290,"completed":270,"skipped":4533,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:16:46.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:16:51.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-651" for this suite.

• [SLOW TEST:5.171 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":290,"completed":271,"skipped":4537,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:16:51.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-7790
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  1 01:16:51.889: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  1 01:16:52.037: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 01:16:54.079: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 01:16:56.132: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 01:16:58.042: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:17:00.042: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:17:02.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:17:04.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:17:06.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:17:08.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 01:17:10.041: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  1 01:17:10.048: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  1 01:17:12.063: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  1 01:17:16.172: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.224:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 01:17:16.172: INFO: >>> kubeConfig: /root/.kube/config
I0501 01:17:16.200669       7 log.go:172] (0xc002b87290) (0xc00152efa0) Create stream
I0501 01:17:16.200699       7 log.go:172] (0xc002b87290) (0xc00152efa0) Stream added, broadcasting: 1
I0501 01:17:16.202484       7 log.go:172] (0xc002b87290) Reply frame received for 1
I0501 01:17:16.202522       7 log.go:172] (0xc002b87290) (0xc000fce000) Create stream
I0501 01:17:16.202530       7 log.go:172] (0xc002b87290) (0xc000fce000) Stream added, broadcasting: 3
I0501 01:17:16.203514       7 log.go:172] (0xc002b87290) Reply frame received for 3
I0501 01:17:16.203534       7 log.go:172] (0xc002b87290) (0xc00152f540) Create stream
I0501 01:17:16.203539       7 log.go:172] (0xc002b87290) (0xc00152f540) Stream added, broadcasting: 5
I0501 01:17:16.204586       7 log.go:172] (0xc002b87290) Reply frame received for 5
I0501 01:17:16.280125       7 log.go:172] (0xc002b87290) Data frame received for 5
I0501 01:17:16.280184       7 log.go:172] (0xc00152f540) (5) Data frame handling
I0501 01:17:16.280213       7 log.go:172] (0xc002b87290) Data frame received for 3
I0501 01:17:16.280226       7 log.go:172] (0xc000fce000) (3) Data frame handling
I0501 01:17:16.280241       7 log.go:172] (0xc000fce000) (3) Data frame sent
I0501 01:17:16.280255       7 log.go:172] (0xc002b87290) Data frame received for 3
I0501 01:17:16.280267       7 log.go:172] (0xc000fce000) (3) Data frame handling
I0501 01:17:16.282116       7 log.go:172] (0xc002b87290) Data frame received for 1
I0501 01:17:16.282144       7 log.go:172] (0xc00152efa0) (1) Data frame handling
I0501 01:17:16.282159       7 log.go:172] (0xc00152efa0) (1) Data frame sent
I0501 01:17:16.282175       7 log.go:172] (0xc002b87290) (0xc00152efa0) Stream removed, broadcasting: 1
I0501 01:17:16.282193       7 log.go:172] (0xc002b87290) Go away received
I0501 01:17:16.282331       7 log.go:172] (0xc002b87290) (0xc00152efa0) Stream removed, broadcasting: 1
I0501 01:17:16.282343       7 log.go:172] (0xc002b87290) (0xc000fce000) Stream removed, broadcasting: 3
I0501 01:17:16.282349       7 log.go:172] (0xc002b87290) (0xc00152f540) Stream removed, broadcasting: 5
May  1 01:17:16.282: INFO: Found all expected endpoints: [netserver-0]
May  1 01:17:16.285: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.188:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 01:17:16.285: INFO: >>> kubeConfig: /root/.kube/config
I0501 01:17:16.320759       7 log.go:172] (0xc002e4c420) (0xc000c8d5e0) Create stream
I0501 01:17:16.320798       7 log.go:172] (0xc002e4c420) (0xc000c8d5e0) Stream added, broadcasting: 1
I0501 01:17:16.323027       7 log.go:172] (0xc002e4c420) Reply frame received for 1
I0501 01:17:16.323068       7 log.go:172] (0xc002e4c420) (0xc0013c15e0) Create stream
I0501 01:17:16.323083       7 log.go:172] (0xc002e4c420) (0xc0013c15e0) Stream added, broadcasting: 3
I0501 01:17:16.323994       7 log.go:172] (0xc002e4c420) Reply frame received for 3
I0501 01:17:16.324037       7 log.go:172] (0xc002e4c420) (0xc000c8d720) Create stream
I0501 01:17:16.324052       7 log.go:172] (0xc002e4c420) (0xc000c8d720) Stream added, broadcasting: 5
I0501 01:17:16.324896       7 log.go:172] (0xc002e4c420) Reply frame received for 5
I0501 01:17:16.399752       7 log.go:172] (0xc002e4c420) Data frame received for 3
I0501 01:17:16.399789       7 log.go:172] (0xc0013c15e0) (3) Data frame handling
I0501 01:17:16.399807       7 log.go:172] (0xc0013c15e0) (3) Data frame sent
I0501 01:17:16.399980       7 log.go:172] (0xc002e4c420) Data frame received for 5
I0501 01:17:16.400000       7 log.go:172] (0xc000c8d720) (5) Data frame handling
I0501 01:17:16.400234       7 log.go:172] (0xc002e4c420) Data frame received for 3
I0501 01:17:16.400271       7 log.go:172] (0xc0013c15e0) (3) Data frame handling
I0501 01:17:16.401926       7 log.go:172] (0xc002e4c420) Data frame received for 1
I0501 01:17:16.401991       7 log.go:172] (0xc000c8d5e0) (1) Data frame handling
I0501 01:17:16.402010       7 log.go:172] (0xc000c8d5e0) (1) Data frame sent
I0501 01:17:16.402023       7 log.go:172] (0xc002e4c420) (0xc000c8d5e0) Stream removed, broadcasting: 1
I0501 01:17:16.402036       7 log.go:172] (0xc002e4c420) Go away received
I0501 01:17:16.402116       7 log.go:172] (0xc002e4c420) (0xc000c8d5e0) Stream removed, broadcasting: 1
I0501 01:17:16.402146       7 log.go:172] (0xc002e4c420) (0xc0013c15e0) Stream removed, broadcasting: 3
I0501 01:17:16.402168       7 log.go:172] (0xc002e4c420) (0xc000c8d720) Stream removed, broadcasting: 5
May  1 01:17:16.402: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:17:16.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7790" for this suite.

• [SLOW TEST:24.850 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":272,"skipped":4556,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:17:16.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 01:17:17.150: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 01:17:19.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892637, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892637, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892637, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892637, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 01:17:22.241: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:17:32.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6177" for this suite.
STEP: Destroying namespace "webhook-6177-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.221 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":290,"completed":273,"skipped":4564,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:17:32.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:17:32.753: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:17:33.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8182" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":290,"completed":274,"skipped":4609,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:17:33.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
May  1 01:17:33.892: INFO: Waiting up to 5m0s for pod "pod-6e716e78-db51-4256-ba56-73eb167cd6b5" in namespace "emptydir-9855" to be "Succeeded or Failed"
May  1 01:17:33.907: INFO: Pod "pod-6e716e78-db51-4256-ba56-73eb167cd6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.39053ms
May  1 01:17:35.912: INFO: Pod "pod-6e716e78-db51-4256-ba56-73eb167cd6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019765055s
May  1 01:17:37.965: INFO: Pod "pod-6e716e78-db51-4256-ba56-73eb167cd6b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073424784s
STEP: Saw pod success
May  1 01:17:37.965: INFO: Pod "pod-6e716e78-db51-4256-ba56-73eb167cd6b5" satisfied condition "Succeeded or Failed"
May  1 01:17:37.969: INFO: Trying to get logs from node latest-worker2 pod pod-6e716e78-db51-4256-ba56-73eb167cd6b5 container test-container: 
STEP: delete the pod
May  1 01:17:38.008: INFO: Waiting for pod pod-6e716e78-db51-4256-ba56-73eb167cd6b5 to disappear
May  1 01:17:38.018: INFO: Pod pod-6e716e78-db51-4256-ba56-73eb167cd6b5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:17:38.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9855" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":275,"skipped":4609,"failed":0}

------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:17:38.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May  1 01:17:38.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9008'
May  1 01:17:38.389: INFO: stderr: ""
May  1 01:17:38.389: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  1 01:17:38.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:17:38.553: INFO: stderr: ""
May  1 01:17:38.554: INFO: stdout: "update-demo-nautilus-lr6pg update-demo-nautilus-r4hj4 "
May  1 01:17:38.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr6pg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:38.679: INFO: stderr: ""
May  1 01:17:38.679: INFO: stdout: ""
May  1 01:17:38.679: INFO: update-demo-nautilus-lr6pg is created but not running
May  1 01:17:43.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:17:43.791: INFO: stderr: ""
May  1 01:17:43.791: INFO: stdout: "update-demo-nautilus-lr6pg update-demo-nautilus-r4hj4 "
May  1 01:17:43.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr6pg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:43.888: INFO: stderr: ""
May  1 01:17:43.888: INFO: stdout: "true"
May  1 01:17:43.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr6pg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:43.989: INFO: stderr: ""
May  1 01:17:43.989: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  1 01:17:43.989: INFO: validating pod update-demo-nautilus-lr6pg
May  1 01:17:43.992: INFO: got data: {
  "image": "nautilus.jpg"
}

May  1 01:17:43.992: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
May  1 01:17:43.992: INFO: update-demo-nautilus-lr6pg is verified up and running
May  1 01:17:43.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:44.086: INFO: stderr: ""
May  1 01:17:44.086: INFO: stdout: "true"
May  1 01:17:44.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hj4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:44.176: INFO: stderr: ""
May  1 01:17:44.176: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  1 01:17:44.176: INFO: validating pod update-demo-nautilus-r4hj4
May  1 01:17:44.180: INFO: got data: {
  "image": "nautilus.jpg"
}

May  1 01:17:44.180: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
May  1 01:17:44.180: INFO: update-demo-nautilus-r4hj4 is verified up and running
STEP: scaling down the replication controller
May  1 01:17:44.182: INFO: scanned /root for discovery docs: 
May  1 01:17:44.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9008'
May  1 01:17:45.309: INFO: stderr: ""
May  1 01:17:45.309: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  1 01:17:45.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:17:45.424: INFO: stderr: ""
May  1 01:17:45.424: INFO: stdout: "update-demo-nautilus-lr6pg update-demo-nautilus-r4hj4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May  1 01:17:50.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:17:50.535: INFO: stderr: ""
May  1 01:17:50.535: INFO: stdout: "update-demo-nautilus-lr6pg update-demo-nautilus-r4hj4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May  1 01:17:55.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:17:55.645: INFO: stderr: ""
May  1 01:17:55.645: INFO: stdout: "update-demo-nautilus-r4hj4 "
May  1 01:17:55.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:55.732: INFO: stderr: ""
May  1 01:17:55.732: INFO: stdout: "true"
May  1 01:17:55.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hj4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:55.831: INFO: stderr: ""
May  1 01:17:55.832: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  1 01:17:55.832: INFO: validating pod update-demo-nautilus-r4hj4
May  1 01:17:55.835: INFO: got data: {
  "image": "nautilus.jpg"
}

May  1 01:17:55.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
May  1 01:17:55.835: INFO: update-demo-nautilus-r4hj4 is verified up and running
STEP: scaling up the replication controller
May  1 01:17:55.838: INFO: scanned /root for discovery docs: 
May  1 01:17:55.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9008'
May  1 01:17:56.964: INFO: stderr: ""
May  1 01:17:56.964: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  1 01:17:56.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:17:57.073: INFO: stderr: ""
May  1 01:17:57.073: INFO: stdout: "update-demo-nautilus-ntvks update-demo-nautilus-r4hj4 "
May  1 01:17:57.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntvks -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:17:57.172: INFO: stderr: ""
May  1 01:17:57.172: INFO: stdout: ""
May  1 01:17:57.172: INFO: update-demo-nautilus-ntvks is created but not running
May  1 01:18:02.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
May  1 01:18:02.289: INFO: stderr: ""
May  1 01:18:02.289: INFO: stdout: "update-demo-nautilus-ntvks update-demo-nautilus-r4hj4 "
May  1 01:18:02.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntvks -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:18:02.381: INFO: stderr: ""
May  1 01:18:02.381: INFO: stdout: "true"
May  1 01:18:02.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntvks -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:18:02.476: INFO: stderr: ""
May  1 01:18:02.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  1 01:18:02.476: INFO: validating pod update-demo-nautilus-ntvks
May  1 01:18:02.479: INFO: got data: {
  "image": "nautilus.jpg"
}

May  1 01:18:02.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
May  1 01:18:02.480: INFO: update-demo-nautilus-ntvks is verified up and running
May  1 01:18:02.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:18:02.583: INFO: stderr: ""
May  1 01:18:02.583: INFO: stdout: "true"
May  1 01:18:02.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hj4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008'
May  1 01:18:02.683: INFO: stderr: ""
May  1 01:18:02.683: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  1 01:18:02.683: INFO: validating pod update-demo-nautilus-r4hj4
May  1 01:18:02.687: INFO: got data: {
  "image": "nautilus.jpg"
}

May  1 01:18:02.687: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
May  1 01:18:02.687: INFO: update-demo-nautilus-r4hj4 is verified up and running
STEP: using delete to clean up resources
May  1 01:18:02.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9008'
May  1 01:18:02.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:18:02.807: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May  1 01:18:02.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9008'
May  1 01:18:02.918: INFO: stderr: "No resources found in kubectl-9008 namespace.\n"
May  1 01:18:02.918: INFO: stdout: ""
May  1 01:18:02.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9008 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  1 01:18:03.019: INFO: stderr: ""
May  1 01:18:03.019: INFO: stdout: "update-demo-nautilus-ntvks\nupdate-demo-nautilus-r4hj4\n"
May  1 01:18:03.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9008'
May  1 01:18:03.615: INFO: stderr: "No resources found in kubectl-9008 namespace.\n"
May  1 01:18:03.615: INFO: stdout: ""
May  1 01:18:03.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9008 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  1 01:18:03.721: INFO: stderr: ""
May  1 01:18:03.721: INFO: stdout: ""
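The cleanup above uses --grace-period=0 --force, which removes the API objects immediately without waiting for the kubelet to confirm termination (hence the warning at 01:18:02.807), then polls until the labeled rc, service, and pods are all gone. The same cleanup by hand, as a sketch:

# Force deletion skips graceful termination; the containers may keep
# running on the node briefly after the API objects disappear.
kubectl delete rc update-demo-nautilus -n kubectl-9008 --grace-period=0 --force

# Poll these until both come back empty, as the test loop does.
kubectl get rc,svc -l name=update-demo -n kubectl-9008 --no-headers
kubectl get pods -l name=update-demo -n kubectl-9008 \
  -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'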
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:18:03.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9008" for this suite.

• [SLOW TEST:25.704 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":290,"completed":276,"skipped":4609,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:18:03.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May  1 01:18:04.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1" in namespace "projected-6268" to be "Succeeded or Failed"
May  1 01:18:04.127: INFO: Pod "downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 69.272175ms
May  1 01:18:06.130: INFO: Pod "downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072553896s
May  1 01:18:08.133: INFO: Pod "downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1": Phase="Running", Reason="", readiness=true. Elapsed: 4.075910527s
May  1 01:18:10.137: INFO: Pod "downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079756046s
STEP: Saw pod success
May  1 01:18:10.137: INFO: Pod "downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1" satisfied condition "Succeeded or Failed"
May  1 01:18:10.140: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1 container client-container: 
STEP: delete the pod
May  1 01:18:10.230: INFO: Waiting for pod downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1 to disappear
May  1 01:18:10.243: INFO: Pod downwardapi-volume-25ba26f6-145f-4af6-a908-91f857453bd1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:18:10.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6268" for this suite.

• [SLOW TEST:6.522 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":290,"completed":277,"skipped":4626,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:18:10.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-9c7b38fe-b632-4369-80ba-2f950ac01d42
STEP: Creating a pod to test consume secrets
May  1 01:18:10.365: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1" in namespace "projected-4608" to be "Succeeded or Failed"
May  1 01:18:10.380: INFO: Pod "pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.053294ms
May  1 01:18:12.499: INFO: Pod "pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133632922s
May  1 01:18:14.522: INFO: Pod "pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15746465s
STEP: Saw pod success
May  1 01:18:14.523: INFO: Pod "pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1" satisfied condition "Succeeded or Failed"
May  1 01:18:14.526: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1 container projected-secret-volume-test: 
STEP: delete the pod
May  1 01:18:14.680: INFO: Waiting for pod pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1 to disappear
May  1 01:18:14.684: INFO: Pod pod-projected-secrets-46ed87bd-61a0-4333-a5b8-3f8f671495f1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:18:14.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4608" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":278,"skipped":4627,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:18:14.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:18:14.843: INFO: Creating deployment "test-recreate-deployment"
May  1 01:18:14.848: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
May  1 01:18:14.971: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May  1 01:18:16.982: INFO: Waiting for deployment "test-recreate-deployment" to complete
May  1 01:18:16.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892694, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892694, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892695, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892694, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:18:18.989: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May  1 01:18:18.998: INFO: Updating deployment test-recreate-deployment
May  1 01:18:18.998: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May  1 01:18:19.783: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3347 /apis/apps/v1/namespaces/deployment-3347/deployments/test-recreate-deployment 35d411a7-f161-4fdd-9607-76cc16da7857 473062 2 2020-05-01 01:18:14 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-01 01:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 01:18:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b13ce8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-01 01:18:19 +0000 UTC,LastTransitionTime:2020-05-01 01:18:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-01 01:18:19 +0000 UTC,LastTransitionTime:2020-05-01 01:18:14 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

May  1 01:18:19.799: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-3347 /apis/apps/v1/namespaces/deployment-3347/replicasets/test-recreate-deployment-d5667d9c7 29fc1fa6-8857-4215-a1b9-20a2f915c0e2 473060 1 2020-05-01 01:18:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 35d411a7-f161-4fdd-9607-76cc16da7857 0xc003fc8200 0xc003fc8201}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:18:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35d411a7-f161-4fdd-9607-76cc16da7857\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fc8298  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 01:18:19.800: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May  1 01:18:19.800: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8  deployment-3347 /apis/apps/v1/namespaces/deployment-3347/replicasets/test-recreate-deployment-6d65b9f6d8 bf6cad6e-8589-471e-b3c2-1cc488062369 473051 2 2020-05-01 01:18:14 +0000 UTC   map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 35d411a7-f161-4fdd-9607-76cc16da7857 0xc003fc8107 0xc003fc8108}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:18:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35d411a7-f161-4fdd-9607-76cc16da7857\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fc8198  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 01:18:19.847: INFO: Pod "test-recreate-deployment-d5667d9c7-sxt2b" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-sxt2b test-recreate-deployment-d5667d9c7- deployment-3347 /api/v1/namespaces/deployment-3347/pods/test-recreate-deployment-d5667d9c7-sxt2b d2fe7552-a1bc-4522-9fca-401787b7c828 473065 0 2020-05-01 01:18:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 29fc1fa6-8857-4215-a1b9-20a2f915c0e2 0xc003fc8760 0xc003fc8761}] []  [{kube-controller-manager Update v1 2020-05-01 01:18:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"29fc1fa6-8857-4215-a1b9-20a2f915c0e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:18:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rg4wg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rg4wg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rg4wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonR
oot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:18:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:18:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:18:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-01 01:18:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:18:19.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3347" for this suite.

• [SLOW TEST:5.195 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":290,"completed":279,"skipped":4628,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:18:19.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0501 01:19:00.930273       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  1 01:19:00.930: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:19:00.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4018" for this suite.

• [SLOW TEST:41.051 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":290,"completed":280,"skipped":4644,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:19:00.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May  1 01:19:05.584: INFO: Successfully updated pod "labelsupdate9972c670-3e2c-4b2a-a08b-e18f4b1ef759"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:19:09.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6755" for this suite.

• [SLOW TEST:8.849 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":290,"completed":281,"skipped":4664,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:19:09.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May  1 01:19:10.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca" in namespace "downward-api-5545" to be "Succeeded or Failed"
May  1 01:19:10.666: INFO: Pod "downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca": Phase="Pending", Reason="", readiness=false. Elapsed: 40.791727ms
May  1 01:19:12.684: INFO: Pod "downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058107173s
May  1 01:19:14.688: INFO: Pod "downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062586437s
STEP: Saw pod success
May  1 01:19:14.688: INFO: Pod "downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca" satisfied condition "Succeeded or Failed"
May  1 01:19:14.691: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca container client-container: 
STEP: delete the pod
May  1 01:19:14.798: INFO: Waiting for pod downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca to disappear
May  1 01:19:14.801: INFO: Pod downwardapi-volume-843231c9-3374-4c29-a195-65d8d84379ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:19:14.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5545" for this suite.

• [SLOW TEST:5.022 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":290,"completed":282,"skipped":4668,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:19:14.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:19:14.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
May  1 01:19:15.480: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T01:19:15Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T01:19:15Z]] name:name1 resourceVersion:473492 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9692a5d1-236e-4626-a05f-55c6f635d4fb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May  1 01:19:25.512: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T01:19:25Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T01:19:25Z]] name:name2 resourceVersion:473537 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:88a71f3b-c7df-49b2-9dce-948d05f91499] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May  1 01:19:35.519: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T01:19:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T01:19:35Z]] name:name1 resourceVersion:473567 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9692a5d1-236e-4626-a05f-55c6f635d4fb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May  1 01:19:45.527: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T01:19:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T01:19:45Z]] name:name2 resourceVersion:473597 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:88a71f3b-c7df-49b2-9dce-948d05f91499] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May  1 01:19:55.536: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T01:19:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T01:19:35Z]] name:name1 resourceVersion:473627 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9692a5d1-236e-4626-a05f-55c6f635d4fb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May  1 01:20:05.545: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T01:19:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T01:19:45Z]] name:name2 resourceVersion:473655 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:88a71f3b-c7df-49b2-9dce-948d05f91499] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:20:16.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8672" for this suite.

• [SLOW TEST:61.255 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":290,"completed":283,"skipped":4679,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:20:16.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-dd56ba60-2ce3-4e11-aef5-a223e9ef5411 in namespace container-probe-2803
May  1 01:20:20.305: INFO: Started pod busybox-dd56ba60-2ce3-4e11-aef5-a223e9ef5411 in namespace container-probe-2803
STEP: checking the pod's current state and verifying that restartCount is present
May  1 01:20:20.308: INFO: Initial restart count of pod busybox-dd56ba60-2ce3-4e11-aef5-a223e9ef5411 is 0
May  1 01:21:12.530: INFO: Restart count of pod container-probe-2803/busybox-dd56ba60-2ce3-4e11-aef5-a223e9ef5411 is now 1 (52.22131279s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:21:12.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2803" for this suite.

• [SLOW TEST:56.513 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":290,"completed":284,"skipped":4686,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:21:12.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:21:43.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9" for this suite.
STEP: Destroying namespace "nsdeletetest-5249" for this suite.
May  1 01:21:43.924: INFO: Namespace nsdeletetest-5249 was already deleted
STEP: Destroying namespace "nsdeletetest-2944" for this suite.

• [SLOW TEST:31.349 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":290,"completed":285,"skipped":4708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:21:43.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May  1 01:21:43.994: INFO: Waiting up to 5m0s for pod "downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7" in namespace "downward-api-7157" to be "Succeeded or Failed"
May  1 01:21:44.051: INFO: Pod "downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7": Phase="Pending", Reason="", readiness=false. Elapsed: 57.12521ms
May  1 01:21:46.055: INFO: Pod "downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060798559s
May  1 01:21:48.059: INFO: Pod "downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065275539s
STEP: Saw pod success
May  1 01:21:48.059: INFO: Pod "downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7" satisfied condition "Succeeded or Failed"
May  1 01:21:48.063: INFO: Trying to get logs from node latest-worker2 pod downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7 container dapi-container: 
STEP: delete the pod
May  1 01:21:48.149: INFO: Waiting for pod downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7 to disappear
May  1 01:21:48.153: INFO: Pod downward-api-623a16f8-37a7-47d5-b721-e3b4d38a60d7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:21:48.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7157" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":290,"completed":286,"skipped":4737,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:21:48.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May  1 01:21:48.225: INFO: Pod name rollover-pod: Found 0 pods out of 1
May  1 01:21:53.255: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  1 01:21:53.255: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May  1 01:21:55.278: INFO: Creating deployment "test-rollover-deployment"
May  1 01:21:55.290: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May  1 01:21:57.295: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May  1 01:21:57.301: INFO: Ensure that both replica sets have 1 created replica
May  1 01:21:57.306: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May  1 01:21:57.314: INFO: Updating deployment test-rollover-deployment
May  1 01:21:57.314: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
May  1 01:21:59.337: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May  1 01:21:59.342: INFO: Make sure deployment "test-rollover-deployment" is complete
May  1 01:21:59.348: INFO: all replica sets need to contain the pod-template-hash label
May  1 01:21:59.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892917, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:22:01.363: INFO: all replica sets need to contain the pod-template-hash label
May  1 01:22:01.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892920, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:22:03.400: INFO: all replica sets need to contain the pod-template-hash label
May  1 01:22:03.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892920, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:22:05.355: INFO: all replica sets need to contain the pod-template-hash label
May  1 01:22:05.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892920, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:22:07.356: INFO: all replica sets need to contain the pod-template-hash label
May  1 01:22:07.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892920, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:22:09.356: INFO: all replica sets need to contain the pod-template-hash label
May  1 01:22:09.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892920, loc:(*time.Location)(0x7c48300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723892915, loc:(*time.Location)(0x7c48300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 01:22:11.355: INFO: 
May  1 01:22:11.355: INFO: Ensure that both old replica sets have no replicas
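The repeated status dumps above are the deployment waiting out minReadySeconds=10 under maxUnavailable=0/maxSurge=1: ReadyReplicas reaches 2 by 01:22:01, but the new pod only counts as available (letting the old ReplicaSet scale to zero) after it has been ready for 10 seconds, hence completion around 01:22:11. A sketch of the same knobs with assumed names:

# maxUnavailable: 0 + maxSurge: 1 keeps one ready pod at all times during
# a rollout; minReadySeconds delays when a new pod counts as available.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-demo
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
EOF

kubectl rollout status deployment/test-rollover-demo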
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May  1 01:22:11.364: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-3524 /apis/apps/v1/namespaces/deployment-3524/deployments/test-rollover-deployment f6cfcef3-9e84-4f6a-8de5-a1e385780295 474207 2 2020-05-01 01:21:55 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-01 01:21:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 01:22:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003631568  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-01 01:21:55 +0000 UTC,LastTransitionTime:2020-05-01 01:21:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-01 01:22:10 +0000 UTC,LastTransitionTime:2020-05-01 01:21:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May  1 01:22:11.368: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879  deployment-3524 /apis/apps/v1/namespaces/deployment-3524/replicasets/test-rollover-deployment-7c4fd9c879 2371d19f-bd2f-425a-86b8-1a1cbcedabe9 474197 2 2020-05-01 01:21:57 +0000 UTC   map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment f6cfcef3-9e84-4f6a-8de5-a1e385780295 0xc0030104c7 0xc0030104c8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:22:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6cfcef3-9e84-4f6a-8de5-a1e385780295\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003010558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  1 01:22:11.368: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May  1 01:22:11.368: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-3524 /apis/apps/v1/namespaces/deployment-3524/replicasets/test-rollover-controller e53e537c-95c0-47cd-903a-be016b011e5a 474206 2 2020-05-01 01:21:48 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f6cfcef3-9e84-4f6a-8de5-a1e385780295 0xc0030102b7 0xc0030102b8}] []  [{e2e.test Update apps/v1 2020-05-01 01:21:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-01 01:22:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6cfcef3-9e84-4f6a-8de5-a1e385780295\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003010358  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 01:22:11.368: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-3524 /apis/apps/v1/namespaces/deployment-3524/replicasets/test-rollover-deployment-5686c4cfd5 5ae73ff8-f62f-4baa-8b26-7c85311273de 474150 2 2020-05-01 01:21:55 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f6cfcef3-9e84-4f6a-8de5-a1e385780295 0xc0030103c7 0xc0030103c8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 01:21:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6cfcef3-9e84-4f6a-8de5-a1e385780295\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003010458  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 01:22:11.372: INFO: Pod "test-rollover-deployment-7c4fd9c879-7mxf8" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-7mxf8 test-rollover-deployment-7c4fd9c879- deployment-3524 /api/v1/namespaces/deployment-3524/pods/test-rollover-deployment-7c4fd9c879-7mxf8 e2479a19-3494-4e17-a079-eebc823345a0 474163 0 2020-05-01 01:21:57 +0000 UTC   map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 2371d19f-bd2f-425a-86b8-1a1cbcedabe9 0xc0038f2907 0xc0038f2908}] []  [{kube-controller-manager Update v1 2020-05-01 01:21:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2371d19f-bd2f-425a-86b8-1a1cbcedabe9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-01 01:22:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.206\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ppz4r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ppz4r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ppz4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:21:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:22:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:22:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 01:21:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.206,StartTime:2020-05-01 01:21:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 01:22:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://ed7ab9a6638c3312226a64c6951e85bf382c2f1f66fe660dc35e238e21b30916,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:22:11.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3524" for this suite.

• [SLOW TEST:23.276 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":290,"completed":287,"skipped":4747,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:22:11.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  1 01:22:11.510: INFO: Waiting up to 5m0s for pod "pod-01278295-bfd4-43ae-b630-ba87fde13e6c" in namespace "emptydir-2666" to be "Succeeded or Failed"
May  1 01:22:11.519: INFO: Pod "pod-01278295-bfd4-43ae-b630-ba87fde13e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.687158ms
May  1 01:22:13.523: INFO: Pod "pod-01278295-bfd4-43ae-b630-ba87fde13e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013228923s
May  1 01:22:15.532: INFO: Pod "pod-01278295-bfd4-43ae-b630-ba87fde13e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021919834s
STEP: Saw pod success
May  1 01:22:15.532: INFO: Pod "pod-01278295-bfd4-43ae-b630-ba87fde13e6c" satisfied condition "Succeeded or Failed"
May  1 01:22:15.535: INFO: Trying to get logs from node latest-worker2 pod pod-01278295-bfd4-43ae-b630-ba87fde13e6c container test-container: 
STEP: delete the pod
May  1 01:22:15.551: INFO: Waiting for pod pod-01278295-bfd4-43ae-b630-ba87fde13e6c to disappear
May  1 01:22:15.555: INFO: Pod pod-01278295-bfd4-43ae-b630-ba87fde13e6c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:22:15.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2666" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":288,"skipped":4751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:22:15.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-322e0d19-b46a-431a-84ce-036fe590d0f9
STEP: Creating secret with name s-test-opt-upd-1b5c8c63-ab89-4ba2-8829-2802af5d3ade
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-322e0d19-b46a-431a-84ce-036fe590d0f9
STEP: Updating secret s-test-opt-upd-1b5c8c63-ab89-4ba2-8829-2802af5d3ade
STEP: Creating secret with name s-test-opt-create-9aaeb607-4ff0-4edf-9896-4b088cfd057a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:22:25.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1755" for this suite.

• [SLOW TEST:10.300 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":289,"skipped":4778,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May  1 01:22:25.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May  1 01:22:25.946: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May  1 01:22:25.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1136'
May  1 01:22:28.995: INFO: stderr: ""
May  1 01:22:28.995: INFO: stdout: "service/agnhost-slave created\n"
May  1 01:22:28.995: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May  1 01:22:28.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1136'
May  1 01:22:29.302: INFO: stderr: ""
May  1 01:22:29.302: INFO: stdout: "service/agnhost-master created\n"
May  1 01:22:29.302: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May  1 01:22:29.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1136'
May  1 01:22:29.641: INFO: stderr: ""
May  1 01:22:29.641: INFO: stdout: "service/frontend created\n"
May  1 01:22:29.641: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May  1 01:22:29.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1136'
May  1 01:22:29.927: INFO: stderr: ""
May  1 01:22:29.927: INFO: stdout: "deployment.apps/frontend created\n"
May  1 01:22:29.927: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May  1 01:22:29.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1136'
May  1 01:22:30.250: INFO: stderr: ""
May  1 01:22:30.250: INFO: stdout: "deployment.apps/agnhost-master created\n"
May  1 01:22:30.250: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May  1 01:22:30.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1136'
May  1 01:22:30.588: INFO: stderr: ""
May  1 01:22:30.588: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May  1 01:22:30.588: INFO: Waiting for all frontend pods to be Running.
May  1 01:22:40.638: INFO: Waiting for frontend to serve content.
May  1 01:22:40.649: INFO: Trying to add a new entry to the guestbook.
May  1 01:22:40.660: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May  1 01:22:40.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1136'
May  1 01:22:40.837: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:22:40.837: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May  1 01:22:40.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1136'
May  1 01:22:40.974: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:22:40.974: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May  1 01:22:40.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1136'
May  1 01:22:41.198: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:22:41.198: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May  1 01:22:41.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1136'
May  1 01:22:41.311: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:22:41.311: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May  1 01:22:41.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1136'
May  1 01:22:41.576: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:22:41.576: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May  1 01:22:41.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1136'
May  1 01:22:41.868: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 01:22:41.868: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May  1 01:22:41.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1136" for this suite.

• [SLOW TEST:16.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":290,"completed":290,"skipped":4801,"failed":0}
SSMay  1 01:22:42.034: INFO: Running AfterSuite actions on all nodes
May  1 01:22:42.034: INFO: Running AfterSuite actions on node 1
May  1 01:22:42.034: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":290,"completed":290,"skipped":4803,"failed":0}

Ran 290 of 5093 Specs in 6305.397 seconds
SUCCESS! -- 290 Passed | 0 Failed | 0 Pending | 4803 Skipped
PASS