I0810 23:20:23.617112 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0810 23:20:23.617334 7 e2e.go:129] Starting e2e run "ecab459a-d7ed-4a36-96c1-e6f041d70e58" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597101622 - Will randomize all specs
Will run 303 of 5238 specs

Aug 10 23:20:23.670: INFO: >>> kubeConfig: /root/.kube/config
Aug 10 23:20:23.674: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 10 23:20:23.699: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 10 23:20:23.739: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 10 23:20:23.739: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 10 23:20:23.739: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 10 23:20:23.746: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 10 23:20:23.746: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 10 23:20:23.746: INFO: e2e test version: v1.20.0-alpha.0.523+97c5f1f7632f2d
Aug 10 23:20:23.747: INFO: kube-apiserver version: v1.19.0-rc.1
Aug 10 23:20:23.747: INFO: >>> kubeConfig: /root/.kube/config
Aug 10 23:20:23.750: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:20:23.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
Aug 10 23:20:23.899: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 10 23:20:23.998: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd" in namespace "security-context-test-3256" to be "Succeeded or Failed"
Aug 10 23:20:24.001: INFO: Pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.004485ms
Aug 10 23:20:26.005: INFO: Pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007440616s
Aug 10 23:20:28.058: INFO: Pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059991377s
Aug 10 23:20:30.062: INFO: Pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0644235s
Aug 10 23:20:32.066: INFO: Pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068698305s
Aug 10 23:20:32.066: INFO: Pod "alpine-nnp-false-e70f9186-0305-4821-9e2c-1a757c8724bd" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:20:32.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3256" for this suite.
• [SLOW TEST:8.344 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":60,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
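For reference, a minimal sketch (using the k8s.io/api types; the name, image, and command are placeholders, not the suite's exact helper) of the kind of pod this spec creates: setting allowPrivilegeEscalation to false turns on the kernel's no_new_privs flag, so the container process cannot gain privileges, e.g. via setuid binaries.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nnpFalsePod builds a pod of the shape this spec waits on ("Succeeded or
// Failed"): RestartPolicy Never so the pod can terminate, and a container
// that must not be able to escalate privileges.
func nnpFalsePod() *corev1.Pod {
	allowEscalation := false // becomes the container's no_new_privs setting
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine-nnp-false",
				Image:   "alpine:3.12", // placeholder; the suite ships its own test image
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &allowEscalation,
				},
			}},
		},
	}
}
```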
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:20:32.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-8c38f216-23e0-469d-8678-ff324a38fce7
STEP: Creating a pod to test consume configMaps
Aug 10 23:20:32.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e" in namespace "configmap-4489" to be "Succeeded or Failed"
Aug 10 23:20:32.184: INFO: Pod "pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.272146ms
Aug 10 23:20:34.507: INFO: Pod "pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339196845s
Aug 10 23:20:36.602: INFO: Pod "pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434175902s
STEP: Saw pod success
Aug 10 23:20:36.602: INFO: Pod "pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e" satisfied condition "Succeeded or Failed"
Aug 10 23:20:36.605: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e container configmap-volume-test:
STEP: delete the pod
Aug 10 23:20:36.776: INFO: Waiting for pod pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e to disappear
Aug 10 23:20:36.828: INFO: Pod pod-configmaps-af13bd91-a2e2-4f5b-b2b9-43c4d23a0b5e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:20:36.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4489" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":81,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
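The "with mappings" variant differs from the plain ConfigMap volume test in that the volume lists explicit key-to-path items. A sketch of that volume source follows (the name, key, and path are assumptions, not the generated values in the log):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// configMapMappedVolume exposes only the listed ConfigMap keys, each at an
// explicit relative path under the mount, instead of one file per key.
func configMapMappedVolume() corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map", // placeholder; the suite appends a UID
				},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // assumed key name in the ConfigMap
					Path: "path/to/data-2", // where the key's value appears in the volume
				}},
			},
		},
	}
}
```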
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":100,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:20:41.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 10 23:20:41.301: INFO: Waiting up to 5m0s for pod "downward-api-11704536-502d-43bc-b320-a93ae6a0d872" in namespace "downward-api-5675" to be "Succeeded or Failed"
Aug 10 23:20:41.311: INFO: Pod "downward-api-11704536-502d-43bc-b320-a93ae6a0d872": Phase="Pending", Reason="", readiness=false. Elapsed: 9.351024ms
Aug 10 23:20:43.453: INFO: Pod "downward-api-11704536-502d-43bc-b320-a93ae6a0d872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152229963s
Aug 10 23:20:45.457: INFO: Pod "downward-api-11704536-502d-43bc-b320-a93ae6a0d872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156138449s
STEP: Saw pod success
Aug 10 23:20:45.457: INFO: Pod "downward-api-11704536-502d-43bc-b320-a93ae6a0d872" satisfied condition "Succeeded or Failed"
Aug 10 23:20:45.460: INFO: Trying to get logs from node latest-worker2 pod downward-api-11704536-502d-43bc-b320-a93ae6a0d872 container dapi-container:
STEP: delete the pod
Aug 10 23:20:45.884: INFO: Waiting for pod downward-api-11704536-502d-43bc-b320-a93ae6a0d872 to disappear
Aug 10 23:20:45.902: INFO: Pod downward-api-11704536-502d-43bc-b320-a93ae6a0d872 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:20:45.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5675" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":4,"skipped":109,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
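This spec covers the env-var form of the downward API: limits.cpu and limits.memory resolve even when the container declares no limits, falling back to the node's allocatable capacity. A sketch of the env entries involved (the variable names are illustrative):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// downwardAPIEnv exposes the container's effective CPU and memory limits as
// environment variables; with no limits set on the container, the kubelet
// substitutes the node's allocatable values.
func downwardAPIEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
}
```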
[sig-instrumentation] Events API
  should delete a collection of events [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:20:45.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Aug 10 23:20:46.044: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:20:46.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9868" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":5,"skipped":125,"failed":0}
SSSSSS
------------------------------
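The single API call behind "requesting DeleteCollection of events" looks roughly like this with client-go (the label selector value is an assumption, not necessarily the suite's label):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteEventsByLabel removes every Event in the namespace that matches the
// label selector with one DeleteCollection request, instead of deleting the
// events one by one.
func deleteEventsByLabel(ctx context.Context, cs kubernetes.Interface, ns string) error {
	return cs.EventsV1().Events(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"}) // assumed label
}
```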
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:20:46.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 10 23:20:46.710: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 10 23:20:48.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 10 23:20:50.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698446, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 10 23:20:53.831: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:20:53.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4527" for this suite.
STEP: Destroying namespace "webhook-4527-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.044 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":6,"skipped":131,"failed":0}
SSS
------------------------------
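What gets registered here is a validating webhook whose backing service is unreachable, with failurePolicy: Fail, so the API server must reject matching requests rather than let them through. A hedged sketch of such a configuration (names, namespace, rule scope, and CA bundle are placeholders, not the suite's exact registration):

```go
package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failClosedWebhook points at a service path that cannot be reached; with
// FailurePolicy Fail, a webhook call that errors out rejects the request
// instead of admitting it (fail closed).
func failClosedWebhook(caBundle []byte) admissionregistrationv1.ValidatingWebhookConfiguration {
	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/unreachable" // placeholder path the server cannot talk to
	return admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "fail-closed.k8s.io",
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				CABundle: caBundle,
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-markers", // placeholder namespace
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
		}},
	}
}
```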
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:20:54.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-a6626753-d249-45e6-b6ea-8b4a8448407e
STEP: Creating a pod to test consume secrets
Aug 10 23:20:54.196: INFO: Waiting up to 5m0s for pod "pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e" in namespace "secrets-1967" to be "Succeeded or Failed"
Aug 10 23:20:54.219: INFO: Pod "pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.67863ms
Aug 10 23:20:56.223: INFO: Pod "pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02723479s
Aug 10 23:20:58.226: INFO: Pod "pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030670535s
Aug 10 23:21:00.230: INFO: Pod "pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034845854s
STEP: Saw pod success
Aug 10 23:21:00.230: INFO: Pod "pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e" satisfied condition "Succeeded or Failed"
Aug 10 23:21:00.233: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e container secret-volume-test:
STEP: delete the pod
Aug 10 23:21:00.268: INFO: Waiting for pod pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e to disappear
Aug 10 23:21:00.303: INFO: Pod pod-secrets-78605108-0ac2-414e-88ec-a9c2312b4a6e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:00.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1967" for this suite.
• [SLOW TEST:6.206 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":134,"failed":0}
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:00.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 10 23:21:04.907: INFO: Successfully updated pod "adopt-release-5n24t"
STEP: Checking that the Job readopts the Pod
Aug 10 23:21:04.907: INFO: Waiting up to 15m0s for pod "adopt-release-5n24t" in namespace "job-6287" to be "adopted"
Aug 10 23:21:04.914: INFO: Pod "adopt-release-5n24t": Phase="Running", Reason="", readiness=true. Elapsed: 6.876588ms
Aug 10 23:21:06.918: INFO: Pod "adopt-release-5n24t": Phase="Running", Reason="", readiness=true. Elapsed: 2.011235315s
Aug 10 23:21:06.918: INFO: Pod "adopt-release-5n24t" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 10 23:21:07.428: INFO: Successfully updated pod "adopt-release-5n24t"
STEP: Checking that the Job releases the Pod
Aug 10 23:21:07.428: INFO: Waiting up to 15m0s for pod "adopt-release-5n24t" in namespace "job-6287" to be "released"
Aug 10 23:21:07.455: INFO: Pod "adopt-release-5n24t": Phase="Running", Reason="", readiness=true. Elapsed: 26.8498ms
Aug 10 23:21:09.457: INFO: Pod "adopt-release-5n24t": Phase="Running", Reason="", readiness=true. Elapsed: 2.029790524s
Aug 10 23:21:09.457: INFO: Pod "adopt-release-5n24t" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:09.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6287" for this suite.
• [SLOW TEST:9.142 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":8,"skipped":134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
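The "Orphaning" step boils down to clearing the pod's ownerReferences while leaving its labels intact, after which the Job controller re-adopts the pod because its labels still match the Job's selector; removing the labels instead makes the controller release it. A sketch of the orphaning update (the e2e framework wraps this in a conflict-retry helper, which this sketch omits):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// orphanPod detaches a pod from its Job by dropping ownerReferences; the Job
// controller then re-adopts it as long as the pod still matches the selector.
func orphanPod(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) error {
	pod.OwnerReferences = nil
	_, err := cs.CoreV1().Pods(pod.Namespace).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}
```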
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:09.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-8065
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8065 to expose endpoints map[]
Aug 10 23:21:09.875: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found
Aug 10 23:21:10.902: INFO: successfully validated that service multi-endpoint-test in namespace services-8065 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-8065
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8065 to expose endpoints map[pod1:[100]]
Aug 10 23:21:13.964: INFO: successfully validated that service multi-endpoint-test in namespace services-8065 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-8065
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8065 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 10 23:21:18.042: INFO: successfully validated that service multi-endpoint-test in namespace services-8065 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-8065
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8065 to expose endpoints map[pod2:[101]]
Aug 10 23:21:18.098: INFO: successfully validated that service multi-endpoint-test in namespace services-8065 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-8065
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8065 to expose endpoints map[]
Aug 10 23:21:19.663: INFO: successfully validated that service multi-endpoint-test in namespace services-8065 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:19.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8065" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:10.347 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":9,"skipped":181,"failed":0}
SSSSSSSSSS
------------------------------
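The service under test exposes two named ports, and each port gets its own endpoint list, which is what the map[pod1:[100] pod2:[101]] assertions check. A sketch of such a service with the target ports from the log (the service ports and selector label are assumptions):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService maps two named service ports to distinct container ports
// (100 and 101 in the log); the endpoints controller tracks each port's
// backing pods separately.
func multiportService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // assumed label
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}
```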
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:19.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-4584360b-0897-4a72-a0d9-7d14d06a154c
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:19.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9021" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":10,"skipped":191,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:20.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 10 23:21:20.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe" in namespace "projected-1107" to be "Succeeded or Failed"
Aug 10 23:21:20.325: INFO: Pod "downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187847ms
Aug 10 23:21:22.502: INFO: Pod "downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180807644s
Aug 10 23:21:24.556: INFO: Pod "downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234558108s
Aug 10 23:21:26.560: INFO: Pod "downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238961177s
STEP: Saw pod success
Aug 10 23:21:26.560: INFO: Pod "downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe" satisfied condition "Succeeded or Failed"
Aug 10 23:21:26.563: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe container client-container:
STEP: delete the pod
Aug 10 23:21:26.651: INFO: Waiting for pod downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe to disappear
Aug 10 23:21:26.655: INFO: Pod downwardapi-volume-ceddff9a-d4f9-4765-b2a9-b5453460adbe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:26.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1107" for this suite.
• [SLOW TEST:6.551 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":227,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:26.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 10 23:21:27.274: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 10 23:21:29.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698487, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698487, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698487, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732698487, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 10 23:21:32.382: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:32.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9059" for this suite.
STEP: Destroying namespace "webhook-9059-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.800 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":12,"skipped":238,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:32.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 10 23:21:32.583: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 10 23:21:34.722: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:35.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3991" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":13,"skipped":243,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:35.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-2e64517e-5137-4aff-8177-b498d985dd78
STEP: Creating a pod to test consume configMaps
Aug 10 23:21:36.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced" in namespace "configmap-6479" to be "Succeeded or Failed"
Aug 10 23:21:36.472: INFO: Pod "pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced": Phase="Pending", Reason="", readiness=false. Elapsed: 32.640996ms
Aug 10 23:21:38.482: INFO: Pod "pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042029509s
Aug 10 23:21:40.502: INFO: Pod "pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062383005s
STEP: Saw pod success
Aug 10 23:21:40.502: INFO: Pod "pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced" satisfied condition "Succeeded or Failed"
Aug 10 23:21:40.505: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced container configmap-volume-test:
STEP: delete the pod
Aug 10 23:21:40.540: INFO: Waiting for pod pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced to disappear
Aug 10 23:21:40.553: INFO: Pod pod-configmaps-54f0bfeb-fdb0-481d-9efa-eede6e87dced no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:21:40.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6479" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":252,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:21:40.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 10 23:21:40.956: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 10 23:22:40.980: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Aug 10 23:22:41.083: INFO: Created pod: pod0-sched-preemption-low-priority
Aug 10 23:22:41.234: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that uses the same resources as a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:23:07.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5949" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:87.109 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":15,"skipped":257,"failed":0}
SSSSSSSSSSSS
------------------------------
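Preemption hinges on PriorityClass objects: low- and medium-priority pods fill the nodes, then a higher-priority (critical) pod is created and the scheduler evicts a lower-priority pod to make room. A sketch of the priority plumbing (the class name and value are illustrative, not the suite's exact objects):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// lowPriorityClass defines a class that any higher-valued preemptor can evict.
func lowPriorityClass() *schedulingv1.PriorityClass {
	return &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-low-priority"},
		Value:      1, // anything lower than the critical pod's priority
	}
}

// withPriority attaches a priority class to a pod spec; the scheduler compares
// the resolved values when deciding whom to preempt.
func withPriority(spec *corev1.PodSpec, className string) {
	spec.PriorityClassName = className
}
```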
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":16,"skipped":269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:23:12.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:24:12.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8035" for this suite.
• [SLOW TEST:60.093 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":299,"failed":0}
SSSSSSSSS
------------------------------
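The probe in this spec always fails, so the pod stays Running but never becomes Ready, and because it is a readiness probe rather than a liveness probe, the kubelet never restarts the container. A sketch of such a probe (the timings are illustrative):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// alwaysFailingReadiness exits non-zero on every check: the container keeps
// running, but the pod is never marked Ready and is removed from service
// endpoints; a failing readiness probe never triggers a restart.
func alwaysFailingReadiness() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 5,  // illustrative values
		PeriodSeconds:       10,
	}
}
```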
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:24:12.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 10 23:24:12.321: INFO: >>> kubeConfig: /root/.kube/config
Aug 10 23:24:15.363: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:24:28.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2119" for this suite.
• [SLOW TEST:15.771 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":18,"skipped":308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:24:28.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 10 23:24:36.352: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 10 23:24:36.374: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 10 23:24:38.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 10 23:24:38.378: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 10 23:24:40.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 10 23:24:40.378: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 10 23:24:42.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 10 23:24:42.378: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 10 23:24:44.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 10 23:24:44.377: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:24:44.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8600" for this suite.
• [SLOW TEST:16.356 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":336,"failed":0}
SSS
------------------------------
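A postStart exec hook runs inside the container right after it starts; the test verifies the hook fired through its side effect against the handler pod created in the BeforeEach step. A sketch of the hook (the command and target address are placeholders):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// postStartExecHook runs a command in the container immediately after start;
// here it calls out to an observer so the effect can be asserted externally.
func postStartExecHook() *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PostStart: &corev1.Handler{
			Exec: &corev1.ExecAction{
				// placeholder target; the suite points this at its HTTPGet handler pod
				Command: []string{"sh", "-c", "curl http://10.0.0.1:8080/echo?msg=poststart"},
			},
		},
	}
}
```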
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:24:48.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 10 23:24:48.687: INFO: Waiting up to 5m0s for pod "pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a" in namespace "emptydir-5506" to be "Succeeded or Failed" Aug 10 23:24:48.706: INFO: Pod "pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.357324ms Aug 10 23:24:50.710: INFO: Pod "pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023548771s Aug 10 23:24:52.715: INFO: Pod "pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027695093s Aug 10 23:24:54.719: INFO: Pod "pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032151685s STEP: Saw pod success Aug 10 23:24:54.719: INFO: Pod "pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a" satisfied condition "Succeeded or Failed" Aug 10 23:24:54.723: INFO: Trying to get logs from node latest-worker2 pod pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a container test-container: STEP: delete the pod Aug 10 23:24:54.756: INFO: Waiting for pod pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a to disappear Aug 10 23:24:54.767: INFO: Pod pod-25dd6e7a-c678-477f-a58a-1f0b729eca6a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:24:54.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5506" for this suite. • [SLOW TEST:6.188 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":21,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:24:54.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:25:11.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2963" for this suite. • [SLOW TEST:17.171 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":22,"skipped":380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:25:11.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-67b20884-480f-4b9f-bd58-3cdc86adcbd1 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:25:18.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1495" for this suite. 
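The ConfigMap spec that just finished ("binary data should be reflected in volume") exercises both halves of the ConfigMap payload API. A short sketch of the fixture shape, with hypothetical names and illustrative byte values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical name and values; the suite appends a random UID.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		// UTF-8 payloads go in Data...
		Data: map[string]string{"data": "value-1"},
		// ...anything that is not valid UTF-8 must go in BinaryData,
		// which travels base64-encoded in the API.
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe, 0x00}},
	}
	fmt.Printf("%s: %d text key(s), %d binary key(s)\n",
		cm.Name, len(cm.Data), len(cm.BinaryData))
}

A pod then mounts the ConfigMap as a volume, and the "Waiting for pod with text data" / "Waiting for pod with binary data" steps above confirm that both the UTF-8 keys and the raw bytes come back intact from the mounted files.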
• [SLOW TEST:6.144 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:25:18.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6031 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-6031 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6031 Aug 10 23:25:18.209: INFO: Found 0 stateful pods, waiting for 1 Aug 10 23:25:28.213: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 10 23:25:28.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 23:25:31.189: INFO: stderr: "I0810 23:25:31.039673 29 log.go:181] (0xc0005940b0) (0xc0007aaf00) Create stream\nI0810 23:25:31.039753 29 log.go:181] (0xc0005940b0) (0xc0007aaf00) Stream added, broadcasting: 1\nI0810 23:25:31.043805 29 log.go:181] (0xc0005940b0) Reply frame received for 1\nI0810 23:25:31.043873 29 log.go:181] (0xc0005940b0) (0xc00090f720) Create stream\nI0810 23:25:31.043897 29 log.go:181] (0xc0005940b0) (0xc00090f720) Stream added, broadcasting: 3\nI0810 23:25:31.044827 29 log.go:181] (0xc0005940b0) Reply frame received for 3\nI0810 23:25:31.044863 29 log.go:181] (0xc0005940b0) (0xc000796820) Create stream\nI0810 23:25:31.044880 29 log.go:181] (0xc0005940b0) (0xc000796820) Stream added, broadcasting: 5\nI0810 23:25:31.045774 29 log.go:181] (0xc0005940b0) Reply frame received for 5\nI0810 23:25:31.145548 29 log.go:181] (0xc0005940b0) Data frame received for 5\nI0810 23:25:31.145580 29 log.go:181] (0xc000796820) (5) Data frame handling\nI0810 23:25:31.145592 29 log.go:181] (0xc000796820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 23:25:31.181574 29 log.go:181] (0xc0005940b0) 
Data frame received for 5\nI0810 23:25:31.181612 29 log.go:181] (0xc0005940b0) Data frame received for 3\nI0810 23:25:31.181666 29 log.go:181] (0xc00090f720) (3) Data frame handling\nI0810 23:25:31.181684 29 log.go:181] (0xc00090f720) (3) Data frame sent\nI0810 23:25:31.181714 29 log.go:181] (0xc000796820) (5) Data frame handling\nI0810 23:25:31.181754 29 log.go:181] (0xc0005940b0) Data frame received for 3\nI0810 23:25:31.181768 29 log.go:181] (0xc00090f720) (3) Data frame handling\nI0810 23:25:31.185435 29 log.go:181] (0xc0005940b0) Data frame received for 1\nI0810 23:25:31.185455 29 log.go:181] (0xc0007aaf00) (1) Data frame handling\nI0810 23:25:31.185462 29 log.go:181] (0xc0007aaf00) (1) Data frame sent\nI0810 23:25:31.185470 29 log.go:181] (0xc0005940b0) (0xc0007aaf00) Stream removed, broadcasting: 1\nI0810 23:25:31.185755 29 log.go:181] (0xc0005940b0) (0xc0007aaf00) Stream removed, broadcasting: 1\nI0810 23:25:31.185769 29 log.go:181] (0xc0005940b0) (0xc00090f720) Stream removed, broadcasting: 3\nI0810 23:25:31.185775 29 log.go:181] (0xc0005940b0) (0xc000796820) Stream removed, broadcasting: 5\n" Aug 10 23:25:31.189: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 23:25:31.189: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 23:25:31.193: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 10 23:25:41.232: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 10 23:25:41.232: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 23:25:41.291: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:25:41.291: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:25:41.291: INFO: Aug 10 23:25:41.291: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 10 23:25:42.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991165213s Aug 10 23:25:43.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.73426921s Aug 10 23:25:44.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.607626878s Aug 10 23:25:45.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.446627762s Aug 10 23:25:46.866: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.442113195s Aug 10 23:25:47.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.416119357s Aug 10 23:25:48.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.41012234s Aug 10 23:25:49.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.404843475s Aug 10 23:25:50.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 399.731762ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6031 Aug 10 23:25:51.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Aug 10 23:25:52.133: INFO: stderr: "I0810 23:25:52.039091 47 log.go:181] (0xc000d1a420) (0xc000c6d7c0) Create stream\nI0810 23:25:52.039145 47 log.go:181] (0xc000d1a420) (0xc000c6d7c0) Stream added, broadcasting: 1\nI0810 23:25:52.043607 47 log.go:181] (0xc000d1a420) Reply frame received for 1\nI0810 23:25:52.043639 47 log.go:181] (0xc000d1a420) (0xc000b043c0) Create stream\nI0810 23:25:52.043648 47 log.go:181] (0xc000d1a420) (0xc000b043c0) Stream added, broadcasting: 3\nI0810 23:25:52.044640 47 log.go:181] (0xc000d1a420) Reply frame received for 3\nI0810 23:25:52.044670 47 log.go:181] (0xc000d1a420) (0xc000454b40) Create stream\nI0810 23:25:52.044679 47 log.go:181] (0xc000d1a420) (0xc000454b40) Stream added, broadcasting: 5\nI0810 23:25:52.045647 47 log.go:181] (0xc000d1a420) Reply frame received for 5\nI0810 23:25:52.125988 47 log.go:181] (0xc000d1a420) Data frame received for 3\nI0810 23:25:52.126023 47 log.go:181] (0xc000b043c0) (3) Data frame handling\nI0810 23:25:52.126036 47 log.go:181] (0xc000b043c0) (3) Data frame sent\nI0810 23:25:52.126043 47 log.go:181] (0xc000d1a420) Data frame received for 3\nI0810 23:25:52.126050 47 log.go:181] (0xc000b043c0) (3) Data frame handling\nI0810 23:25:52.126082 47 log.go:181] (0xc000d1a420) Data frame received for 5\nI0810 23:25:52.126096 47 log.go:181] (0xc000454b40) (5) Data frame handling\nI0810 23:25:52.126114 47 log.go:181] (0xc000454b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 23:25:52.126191 47 log.go:181] (0xc000d1a420) Data frame received for 5\nI0810 23:25:52.126219 47 log.go:181] (0xc000454b40) (5) Data frame handling\nI0810 23:25:52.127915 47 log.go:181] (0xc000d1a420) Data frame received for 1\nI0810 23:25:52.128008 47 log.go:181] (0xc000c6d7c0) (1) Data frame handling\nI0810 23:25:52.128038 47 log.go:181] (0xc000c6d7c0) (1) Data frame sent\nI0810 23:25:52.128067 47 log.go:181] (0xc000d1a420) (0xc000c6d7c0) Stream removed, broadcasting: 1\nI0810 23:25:52.128090 47 log.go:181] (0xc000d1a420) Go away received\nI0810 23:25:52.128591 47 log.go:181] (0xc000d1a420) (0xc000c6d7c0) Stream removed, broadcasting: 1\nI0810 23:25:52.128630 47 log.go:181] (0xc000d1a420) (0xc000b043c0) Stream removed, broadcasting: 3\nI0810 23:25:52.128655 47 log.go:181] (0xc000d1a420) (0xc000454b40) Stream removed, broadcasting: 5\n" Aug 10 23:25:52.133: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 23:25:52.133: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 23:25:52.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:25:52.361: INFO: stderr: "I0810 23:25:52.270317 65 log.go:181] (0xc000548f20) (0xc000a426e0) Create stream\nI0810 23:25:52.270372 65 log.go:181] (0xc000548f20) (0xc000a426e0) Stream added, broadcasting: 1\nI0810 23:25:52.276965 65 log.go:181] (0xc000548f20) Reply frame received for 1\nI0810 23:25:52.277014 65 log.go:181] (0xc000548f20) (0xc000824640) Create stream\nI0810 23:25:52.277032 65 log.go:181] (0xc000548f20) (0xc000824640) Stream added, broadcasting: 3\nI0810 23:25:52.278122 65 log.go:181] (0xc000548f20) Reply frame received for 3\nI0810 23:25:52.278167 65 log.go:181] (0xc000548f20) (0xc000486b40) Create stream\nI0810 23:25:52.278181 65 log.go:181] 
(0xc000548f20) (0xc000486b40) Stream added, broadcasting: 5\nI0810 23:25:52.278906 65 log.go:181] (0xc000548f20) Reply frame received for 5\nI0810 23:25:52.353521 65 log.go:181] (0xc000548f20) Data frame received for 5\nI0810 23:25:52.353543 65 log.go:181] (0xc000486b40) (5) Data frame handling\nI0810 23:25:52.353553 65 log.go:181] (0xc000486b40) (5) Data frame sent\nI0810 23:25:52.353558 65 log.go:181] (0xc000548f20) Data frame received for 5\nI0810 23:25:52.353562 65 log.go:181] (0xc000486b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0810 23:25:52.353579 65 log.go:181] (0xc000548f20) Data frame received for 3\nI0810 23:25:52.353585 65 log.go:181] (0xc000824640) (3) Data frame handling\nI0810 23:25:52.353591 65 log.go:181] (0xc000824640) (3) Data frame sent\nI0810 23:25:52.353596 65 log.go:181] (0xc000548f20) Data frame received for 3\nI0810 23:25:52.353600 65 log.go:181] (0xc000824640) (3) Data frame handling\nI0810 23:25:52.355455 65 log.go:181] (0xc000548f20) Data frame received for 1\nI0810 23:25:52.355477 65 log.go:181] (0xc000a426e0) (1) Data frame handling\nI0810 23:25:52.355491 65 log.go:181] (0xc000a426e0) (1) Data frame sent\nI0810 23:25:52.355505 65 log.go:181] (0xc000548f20) (0xc000a426e0) Stream removed, broadcasting: 1\nI0810 23:25:52.355522 65 log.go:181] (0xc000548f20) Go away received\nI0810 23:25:52.355892 65 log.go:181] (0xc000548f20) (0xc000a426e0) Stream removed, broadcasting: 1\nI0810 23:25:52.355915 65 log.go:181] (0xc000548f20) (0xc000824640) Stream removed, broadcasting: 3\nI0810 23:25:52.355922 65 log.go:181] (0xc000548f20) (0xc000486b40) Stream removed, broadcasting: 5\n" Aug 10 23:25:52.361: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 23:25:52.361: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 23:25:52.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:25:52.564: INFO: stderr: "I0810 23:25:52.486920 83 log.go:181] (0xc000c1c840) (0xc000afd360) Create stream\nI0810 23:25:52.486977 83 log.go:181] (0xc000c1c840) (0xc000afd360) Stream added, broadcasting: 1\nI0810 23:25:52.489722 83 log.go:181] (0xc000c1c840) Reply frame received for 1\nI0810 23:25:52.489757 83 log.go:181] (0xc000c1c840) (0xc000888b40) Create stream\nI0810 23:25:52.489780 83 log.go:181] (0xc000c1c840) (0xc000888b40) Stream added, broadcasting: 3\nI0810 23:25:52.490786 83 log.go:181] (0xc000c1c840) Reply frame received for 3\nI0810 23:25:52.490817 83 log.go:181] (0xc000c1c840) (0xc000b01180) Create stream\nI0810 23:25:52.490840 83 log.go:181] (0xc000c1c840) (0xc000b01180) Stream added, broadcasting: 5\nI0810 23:25:52.491850 83 log.go:181] (0xc000c1c840) Reply frame received for 5\nI0810 23:25:52.558888 83 log.go:181] (0xc000c1c840) Data frame received for 3\nI0810 23:25:52.558935 83 log.go:181] (0xc000888b40) (3) Data frame handling\nI0810 23:25:52.558950 83 log.go:181] (0xc000888b40) (3) Data frame sent\nI0810 23:25:52.558960 83 log.go:181] (0xc000c1c840) Data frame received for 3\nI0810 23:25:52.558970 83 log.go:181] (0xc000888b40) (3) Data frame handling\nI0810 23:25:52.558985 83 log.go:181] (0xc000c1c840) Data frame received for 5\nI0810 23:25:52.558992 83 log.go:181] 
(0xc000b01180) (5) Data frame handling\nI0810 23:25:52.558999 83 log.go:181] (0xc000b01180) (5) Data frame sent\nI0810 23:25:52.559007 83 log.go:181] (0xc000c1c840) Data frame received for 5\nI0810 23:25:52.559017 83 log.go:181] (0xc000b01180) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0810 23:25:52.560345 83 log.go:181] (0xc000c1c840) Data frame received for 1\nI0810 23:25:52.560374 83 log.go:181] (0xc000afd360) (1) Data frame handling\nI0810 23:25:52.560386 83 log.go:181] (0xc000afd360) (1) Data frame sent\nI0810 23:25:52.560396 83 log.go:181] (0xc000c1c840) (0xc000afd360) Stream removed, broadcasting: 1\nI0810 23:25:52.560427 83 log.go:181] (0xc000c1c840) Go away received\nI0810 23:25:52.560860 83 log.go:181] (0xc000c1c840) (0xc000afd360) Stream removed, broadcasting: 1\nI0810 23:25:52.560882 83 log.go:181] (0xc000c1c840) (0xc000888b40) Stream removed, broadcasting: 3\nI0810 23:25:52.560892 83 log.go:181] (0xc000c1c840) (0xc000b01180) Stream removed, broadcasting: 5\n" Aug 10 23:25:52.564: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 23:25:52.564: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 23:25:52.568: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 10 23:26:02.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 23:26:02.572: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 23:26:02.572: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 10 23:26:02.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 23:26:02.803: INFO: stderr: "I0810 23:26:02.711015 101 log.go:181] (0xc000f420b0) (0xc000718960) Create stream\nI0810 23:26:02.711105 101 log.go:181] (0xc000f420b0) (0xc000718960) Stream added, broadcasting: 1\nI0810 23:26:02.713267 101 log.go:181] (0xc000f420b0) Reply frame received for 1\nI0810 23:26:02.713317 101 log.go:181] (0xc000f420b0) (0xc00014bae0) Create stream\nI0810 23:26:02.713330 101 log.go:181] (0xc000f420b0) (0xc00014bae0) Stream added, broadcasting: 3\nI0810 23:26:02.714316 101 log.go:181] (0xc000f420b0) Reply frame received for 3\nI0810 23:26:02.714468 101 log.go:181] (0xc000f420b0) (0xc00019dc20) Create stream\nI0810 23:26:02.714498 101 log.go:181] (0xc000f420b0) (0xc00019dc20) Stream added, broadcasting: 5\nI0810 23:26:02.715506 101 log.go:181] (0xc000f420b0) Reply frame received for 5\nI0810 23:26:02.794600 101 log.go:181] (0xc000f420b0) Data frame received for 3\nI0810 23:26:02.794642 101 log.go:181] (0xc00014bae0) (3) Data frame handling\nI0810 23:26:02.794662 101 log.go:181] (0xc00014bae0) (3) Data frame sent\nI0810 23:26:02.794676 101 log.go:181] (0xc000f420b0) Data frame received for 3\nI0810 23:26:02.794685 101 log.go:181] (0xc00014bae0) (3) Data frame handling\nI0810 23:26:02.794735 101 log.go:181] (0xc000f420b0) Data frame received for 5\nI0810 23:26:02.794763 101 log.go:181] (0xc00019dc20) (5) Data frame handling\nI0810 23:26:02.794791 101 log.go:181] (0xc00019dc20) (5) Data frame sent\nI0810 
23:26:02.794805 101 log.go:181] (0xc000f420b0) Data frame received for 5\nI0810 23:26:02.794821 101 log.go:181] (0xc00019dc20) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 23:26:02.796185 101 log.go:181] (0xc000f420b0) Data frame received for 1\nI0810 23:26:02.796210 101 log.go:181] (0xc000718960) (1) Data frame handling\nI0810 23:26:02.796231 101 log.go:181] (0xc000718960) (1) Data frame sent\nI0810 23:26:02.796244 101 log.go:181] (0xc000f420b0) (0xc000718960) Stream removed, broadcasting: 1\nI0810 23:26:02.796259 101 log.go:181] (0xc000f420b0) Go away received\nI0810 23:26:02.796865 101 log.go:181] (0xc000f420b0) (0xc000718960) Stream removed, broadcasting: 1\nI0810 23:26:02.796894 101 log.go:181] (0xc000f420b0) (0xc00014bae0) Stream removed, broadcasting: 3\nI0810 23:26:02.796905 101 log.go:181] (0xc000f420b0) (0xc00019dc20) Stream removed, broadcasting: 5\n" Aug 10 23:26:02.803: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 23:26:02.803: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 23:26:02.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 23:26:03.032: INFO: stderr: "I0810 23:26:02.931715 119 log.go:181] (0xc000eb6dc0) (0xc0008285a0) Create stream\nI0810 23:26:02.931765 119 log.go:181] (0xc000eb6dc0) (0xc0008285a0) Stream added, broadcasting: 1\nI0810 23:26:02.935506 119 log.go:181] (0xc000eb6dc0) Reply frame received for 1\nI0810 23:26:02.935548 119 log.go:181] (0xc000eb6dc0) (0xc0003c6780) Create stream\nI0810 23:26:02.935559 119 log.go:181] (0xc000eb6dc0) (0xc0003c6780) Stream added, broadcasting: 3\nI0810 23:26:02.936436 119 log.go:181] (0xc000eb6dc0) Reply frame received for 3\nI0810 23:26:02.936475 119 log.go:181] (0xc000eb6dc0) (0xc0003c6d20) Create stream\nI0810 23:26:02.936491 119 log.go:181] (0xc000eb6dc0) (0xc0003c6d20) Stream added, broadcasting: 5\nI0810 23:26:02.937317 119 log.go:181] (0xc000eb6dc0) Reply frame received for 5\nI0810 23:26:03.001715 119 log.go:181] (0xc000eb6dc0) Data frame received for 5\nI0810 23:26:03.001754 119 log.go:181] (0xc0003c6d20) (5) Data frame handling\nI0810 23:26:03.001778 119 log.go:181] (0xc0003c6d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 23:26:03.024111 119 log.go:181] (0xc000eb6dc0) Data frame received for 3\nI0810 23:26:03.024126 119 log.go:181] (0xc0003c6780) (3) Data frame handling\nI0810 23:26:03.024135 119 log.go:181] (0xc0003c6780) (3) Data frame sent\nI0810 23:26:03.024206 119 log.go:181] (0xc000eb6dc0) Data frame received for 3\nI0810 23:26:03.024223 119 log.go:181] (0xc0003c6780) (3) Data frame handling\nI0810 23:26:03.024870 119 log.go:181] (0xc000eb6dc0) Data frame received for 5\nI0810 23:26:03.024907 119 log.go:181] (0xc0003c6d20) (5) Data frame handling\nI0810 23:26:03.025991 119 log.go:181] (0xc000eb6dc0) Data frame received for 1\nI0810 23:26:03.026023 119 log.go:181] (0xc0008285a0) (1) Data frame handling\nI0810 23:26:03.026055 119 log.go:181] (0xc0008285a0) (1) Data frame sent\nI0810 23:26:03.026099 119 log.go:181] (0xc000eb6dc0) (0xc0008285a0) Stream removed, broadcasting: 1\nI0810 23:26:03.026137 119 log.go:181] (0xc000eb6dc0) Go away received\nI0810 23:26:03.026654 119 log.go:181] (0xc000eb6dc0) (0xc0008285a0) Stream 
removed, broadcasting: 1\nI0810 23:26:03.026681 119 log.go:181] (0xc000eb6dc0) (0xc0003c6780) Stream removed, broadcasting: 3\nI0810 23:26:03.026692 119 log.go:181] (0xc000eb6dc0) (0xc0003c6d20) Stream removed, broadcasting: 5\n" Aug 10 23:26:03.032: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 23:26:03.032: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 23:26:03.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 23:26:03.285: INFO: stderr: "I0810 23:26:03.161288 137 log.go:181] (0xc000ec8dc0) (0xc000b2b860) Create stream\nI0810 23:26:03.161342 137 log.go:181] (0xc000ec8dc0) (0xc000b2b860) Stream added, broadcasting: 1\nI0810 23:26:03.166880 137 log.go:181] (0xc000ec8dc0) Reply frame received for 1\nI0810 23:26:03.166928 137 log.go:181] (0xc000ec8dc0) (0xc00099e0a0) Create stream\nI0810 23:26:03.166956 137 log.go:181] (0xc000ec8dc0) (0xc00099e0a0) Stream added, broadcasting: 3\nI0810 23:26:03.167904 137 log.go:181] (0xc000ec8dc0) Reply frame received for 3\nI0810 23:26:03.167952 137 log.go:181] (0xc000ec8dc0) (0xc00083b180) Create stream\nI0810 23:26:03.167966 137 log.go:181] (0xc000ec8dc0) (0xc00083b180) Stream added, broadcasting: 5\nI0810 23:26:03.168814 137 log.go:181] (0xc000ec8dc0) Reply frame received for 5\nI0810 23:26:03.246426 137 log.go:181] (0xc000ec8dc0) Data frame received for 5\nI0810 23:26:03.246452 137 log.go:181] (0xc00083b180) (5) Data frame handling\nI0810 23:26:03.246474 137 log.go:181] (0xc00083b180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 23:26:03.276239 137 log.go:181] (0xc000ec8dc0) Data frame received for 3\nI0810 23:26:03.276264 137 log.go:181] (0xc00099e0a0) (3) Data frame handling\nI0810 23:26:03.276286 137 log.go:181] (0xc00099e0a0) (3) Data frame sent\nI0810 23:26:03.276342 137 log.go:181] (0xc000ec8dc0) Data frame received for 5\nI0810 23:26:03.276367 137 log.go:181] (0xc00083b180) (5) Data frame handling\nI0810 23:26:03.276454 137 log.go:181] (0xc000ec8dc0) Data frame received for 3\nI0810 23:26:03.276468 137 log.go:181] (0xc00099e0a0) (3) Data frame handling\nI0810 23:26:03.278181 137 log.go:181] (0xc000ec8dc0) Data frame received for 1\nI0810 23:26:03.278227 137 log.go:181] (0xc000b2b860) (1) Data frame handling\nI0810 23:26:03.278258 137 log.go:181] (0xc000b2b860) (1) Data frame sent\nI0810 23:26:03.278278 137 log.go:181] (0xc000ec8dc0) (0xc000b2b860) Stream removed, broadcasting: 1\nI0810 23:26:03.278308 137 log.go:181] (0xc000ec8dc0) Go away received\nI0810 23:26:03.278728 137 log.go:181] (0xc000ec8dc0) (0xc000b2b860) Stream removed, broadcasting: 1\nI0810 23:26:03.278751 137 log.go:181] (0xc000ec8dc0) (0xc00099e0a0) Stream removed, broadcasting: 3\nI0810 23:26:03.278760 137 log.go:181] (0xc000ec8dc0) (0xc00083b180) Stream removed, broadcasting: 5\n" Aug 10 23:26:03.285: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 23:26:03.285: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 23:26:03.285: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 23:26:03.288: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 10 
23:26:13.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 10 23:26:13.295: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 10 23:26:13.296: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 10 23:26:13.309: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:13.309: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:13.309: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:13.309: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:13.309: INFO: Aug 10 23:26:13.309: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 10 23:26:14.314: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:14.314: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:14.314: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:14.314: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 
+0000 UTC }] Aug 10 23:26:14.314: INFO: Aug 10 23:26:14.314: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 10 23:26:15.471: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:15.471: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:15.471: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:15.471: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:15.471: INFO: Aug 10 23:26:15.471: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 10 23:26:16.476: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:16.476: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:16.476: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:16.476: INFO: Aug 10 23:26:16.476: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 10 23:26:17.480: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:17.480: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:17.480: INFO: ss-2 
latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:17.480: INFO: Aug 10 23:26:17.480: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 10 23:26:18.484: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:18.484: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:18.484: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:18.484: INFO: Aug 10 23:26:18.484: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 10 23:26:19.489: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:19.489: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:19.489: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:19.489: INFO: Aug 10 23:26:19.489: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 10 23:26:20.494: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:20.494: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:20.494: INFO: ss-2 latest-worker2 Pending 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:20.494: INFO: Aug 10 23:26:20.494: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 10 23:26:21.499: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:21.499: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:21.499: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:21.499: INFO: Aug 10 23:26:21.499: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 10 23:26:22.504: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 23:26:22.504: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:18 +0000 UTC }] Aug 10 23:26:22.504: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:26:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 23:25:41 +0000 UTC }] Aug 10 23:26:22.504: INFO: Aug 10 23:26:22.504: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6031 Aug 10 23:26:23.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:26:23.656: INFO: rc: 1 Aug 10 23:26:23.656: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Aug 10 23:26:33.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:26:33.763: INFO: rc: 1 Aug 10 23:26:33.763: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:26:43.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:26:43.870: INFO: rc: 1 Aug 10 23:26:43.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:26:53.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:26:53.983: INFO: rc: 1 Aug 10 23:26:53.983: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:27:03.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:27:04.095: INFO: rc: 1 Aug 10 23:27:04.095: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:27:14.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:27:14.200: INFO: rc: 1 Aug 10 23:27:14.200: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:27:24.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:27:24.312: INFO: rc: 1 Aug 10 23:27:24.312: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:27:34.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:27:34.418: INFO: rc: 1 Aug 10 23:27:34.418: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:27:44.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:27:44.527: INFO: rc: 1 Aug 10 23:27:44.527: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:27:54.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:27:54.623: INFO: rc: 1 Aug 10 23:27:54.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:28:04.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:28:04.721: INFO: rc: 1 Aug 10 23:28:04.721: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:28:14.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:28:14.830: INFO: rc: 1 Aug 10 23:28:14.830: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:28:24.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:28:24.941: INFO: rc: 1 Aug 10 23:28:24.941: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:28:34.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:28:35.044: INFO: rc: 1 Aug 10 23:28:35.045: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:28:45.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:28:45.153: INFO: rc: 1 Aug 10 23:28:45.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:28:55.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:28:55.263: INFO: rc: 1 Aug 10 23:28:55.263: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 10 23:29:05.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:29:05.364: INFO: rc: 1 Aug 10 23:29:05.364: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1 Aug 10 23:29:15.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:29:15.495: INFO: rc: 1 Aug 10 23:29:15.495: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [the identical RunHostCmd attempt, rc: 1, and NotFound error repeated every 10s from 23:29:25 through 23:31:16] Aug 10 23:31:26.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 23:31:26.903: INFO: rc: 1 Aug 10 23:31:26.903: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Aug 10 23:31:26.903: INFO: Scaling statefulset ss to 0 Aug 10 23:31:26.926: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 10 23:31:26.928: INFO: Deleting all statefulset in ns statefulset-6031 Aug 10 23:31:26.933: INFO: Scaling statefulset ss to 0 Aug 10 23:31:26.940: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 23:31:26.942: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:31:26.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6031" for this suite. 
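The RunHostCmd probe above is plain kubectl exec in a 10s retry loop; rc 1 with pods "ss-0" not found only means the burst-scaled replica has not been recreated yet. A minimal manual reproduction of the same probe, assuming the same kubeconfig, namespace, and pod name, would be:

    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss-0 -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    # exit code 1 here comes from kubectl itself (the pod lookup failed),
    # not from the mv command, which is why the framework keeps retrying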
• [SLOW TEST:368.874 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":24,"skipped":431,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:31:26.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:31:31.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7709" for this suite. 
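The Kubelet case above only asserts that a container whose command always fails ends up with a populated terminated state and reason. A hand check of the same condition, with a hypothetical pod name, might look like:

    kubectl get pod busybox-always-fail -n kubelet-test-7709 \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
    # a command that exits non-zero typically reports the reason "Error"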
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":442,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:31:31.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:33:31.162: INFO: Deleting pod "var-expansion-4f808350-1957-4ee3-a10d-599b11342359" in namespace "var-expansion-4624" Aug 10 23:33:31.168: INFO: Wait up to 5m0s for pod "var-expansion-4f808350-1957-4ee3-a10d-599b11342359" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:33:33.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4624" for this suite. • [SLOW TEST:122.119 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":26,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:33:33.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:33:49.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8484" for this suite. • [SLOW TEST:16.315 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":27,"skipped":484,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:33:49.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:00.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-64" for this suite. • [SLOW TEST:11.175 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":28,"skipped":484,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:00.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3212 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3212 STEP: Creating statefulset with conflicting port in namespace statefulset-3212 STEP: Waiting until pod test-pod will start running in namespace statefulset-3212 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3212 Aug 10 23:34:06.866: INFO: Observed stateful pod in namespace: statefulset-3212, name: ss-0, uid: c063438c-3a30-4931-9b98-e28b4d0c2510, status phase: Pending. Waiting for statefulset controller to delete. Aug 10 23:34:07.629: INFO: Observed stateful pod in namespace: statefulset-3212, name: ss-0, uid: c063438c-3a30-4931-9b98-e28b4d0c2510, status phase: Failed. Waiting for statefulset controller to delete. Aug 10 23:34:07.642: INFO: Observed stateful pod in namespace: statefulset-3212, name: ss-0, uid: c063438c-3a30-4931-9b98-e28b4d0c2510, status phase: Failed. Waiting for statefulset controller to delete. Aug 10 23:34:07.659: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3212 STEP: Removing pod with conflicting port in namespace statefulset-3212 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3212 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 10 23:34:13.822: INFO: Deleting all statefulset in ns statefulset-3212 Aug 10 23:34:13.825: INFO: Scaling statefulset ss to 0 Aug 10 23:34:23.886: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 23:34:23.932: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:23.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3212" for this suite. 
• [SLOW TEST:23.265 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":29,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:23.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-cae610e9-a33a-4121-a2f8-e2cd55cb164c STEP: Creating a pod to test consume secrets Aug 10 23:34:24.072: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f" in namespace "projected-2044" to be "Succeeded or Failed" Aug 10 23:34:24.091: INFO: Pod "pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.267708ms Aug 10 23:34:26.095: INFO: Pod "pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023448879s Aug 10 23:34:28.100: INFO: Pod "pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028108961s STEP: Saw pod success Aug 10 23:34:28.100: INFO: Pod "pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f" satisfied condition "Succeeded or Failed" Aug 10 23:34:28.103: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f container secret-volume-test: STEP: delete the pod Aug 10 23:34:28.145: INFO: Waiting for pod pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f to disappear Aug 10 23:34:28.204: INFO: Pod pod-projected-secrets-b73be4b4-2b6c-40d0-b573-d88c9ec9761f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:28.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2044" for this suite. 
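The projected-secret case above mounts one secret through two volumes in the same pod. A minimal pod of that shape, with hypothetical names, could be:

    kubectl create secret generic demo-secret -n projected-2044 --from-literal=data-1=value-1
    kubectl apply -n projected-2044 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
        volumeMounts:
        - name: vol-1
          mountPath: /etc/projected-1
        - name: vol-2
          mountPath: /etc/projected-2
      volumes:
      - name: vol-1
        projected:
          sources:
          - secret:
              name: demo-secret
      - name: vol-2
        projected:
          sources:
          - secret:
              name: demo-secret
    EOF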
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":517,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:28.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Aug 10 23:34:28.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f -' Aug 10 23:34:28.804: INFO: stderr: "" Aug 10 23:34:28.804: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Aug 10 23:34:28.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config diff -f -' Aug 10 23:34:29.392: INFO: rc: 1 Aug 10 23:34:29.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete -f -' Aug 10 23:34:29.500: INFO: stderr: "" Aug 10 23:34:29.500: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:29.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6347" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":31,"skipped":520,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:29.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:34:29.624: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:30.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1778" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":32,"skipped":533,"failed":0} SS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:30.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Aug 10 23:34:30.802: INFO: created test-event-1 Aug 10 23:34:30.808: INFO: created test-event-2 Aug 10 23:34:30.814: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Aug 10 23:34:30.819: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Aug 10 23:34:30.833: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:30.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7801" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":33,"skipped":535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:30.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:42.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9448" for this suite. • [SLOW TEST:11.453 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":34,"skipped":561,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:42.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 10 23:34:42.420: INFO: Waiting up to 5m0s for pod "pod-f388095e-cf26-49da-94cf-c29023d7d502" in namespace "emptydir-3139" to be "Succeeded or Failed" Aug 10 23:34:42.445: INFO: Pod "pod-f388095e-cf26-49da-94cf-c29023d7d502": Phase="Pending", Reason="", readiness=false. Elapsed: 24.861933ms Aug 10 23:34:44.448: INFO: Pod "pod-f388095e-cf26-49da-94cf-c29023d7d502": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027840875s Aug 10 23:34:46.502: INFO: Pod "pod-f388095e-cf26-49da-94cf-c29023d7d502": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.081618052s Aug 10 23:34:48.506: INFO: Pod "pod-f388095e-cf26-49da-94cf-c29023d7d502": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086513764s STEP: Saw pod success Aug 10 23:34:48.507: INFO: Pod "pod-f388095e-cf26-49da-94cf-c29023d7d502" satisfied condition "Succeeded or Failed" Aug 10 23:34:48.510: INFO: Trying to get logs from node latest-worker2 pod pod-f388095e-cf26-49da-94cf-c29023d7d502 container test-container: STEP: delete the pod Aug 10 23:34:48.547: INFO: Waiting for pod pod-f388095e-cf26-49da-94cf-c29023d7d502 to disappear Aug 10 23:34:48.567: INFO: Pod pod-f388095e-cf26-49da-94cf-c29023d7d502 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:48.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3139" for this suite. • [SLOW TEST:6.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":35,"skipped":577,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:48.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 10 23:34:48.658: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 10 23:34:48.665: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 10 23:34:48.665: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 10 23:34:48.709: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 10 23:34:48.709: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 10 23:34:48.769: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 10 23:34:48.769: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 10 23:34:56.042: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:34:56.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2995" for this suite. 
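The quantity dumps above decode to defaultRequest cpu=100m, memory=200Mi (209715200 bytes), ephemeral-storage=200Gi (214748364800 bytes) and default limits cpu=500m, memory=500Mi, ephemeral-storage=500Gi. A LimitRange consistent with those verified values, as a sketch:

    kubectl apply -n limitrange-2995 -f - <<'EOF'
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limitrange-demo
    spec:
      limits:
      - type: Container
        defaultRequest:            # filled in when a container omits requests
          cpu: 100m
          memory: 200Mi
          ephemeral-storage: 200Gi
        default:                   # filled in when a container omits limits
          cpu: 500m
          memory: 500Mi
          ephemeral-storage: 500Gi
    EOF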
• [SLOW TEST:7.565 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":36,"skipped":591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:34:56.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 10 23:34:56.628: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:35:09.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2078" for this suite. 
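The RestartNever case above only needs both init containers to run to completion, in order, before the app container starts. A minimal pod of that shape, names hypothetical:

    kubectl apply -n init-container-2078 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["/bin/true"]
      - name: init2
        image: busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.2
    EOF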
• [SLOW TEST:13.174 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":37,"skipped":658,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:35:09.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 10 23:35:09.526: INFO: PodSpec: initContainers in spec.initContainers Aug 10 23:36:00.372: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a4d0b8aa-9b27-4fdf-bd0c-d0f822bc8655", GenerateName:"", Namespace:"init-container-7904", SelfLink:"/api/v1/namespaces/init-container-7904/pods/pod-init-a4d0b8aa-9b27-4fdf-bd0c-d0f822bc8655", UID:"6bd15409-1358-4555-8505-cfa7bef44895", ResourceVersion:"6037109", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732699309, loc:(*time.Location)(0x7e34b60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"526787513"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034ef5c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034ef5e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034ef600), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034ef620)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-24dlv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005cff300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-24dlv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-24dlv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-24dlv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0045653c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010d69a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004565450)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004565470)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004565478), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00456547c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003f2d720), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699309, loc:(*time.Location)(0x7e34b60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699309, loc:(*time.Location)(0x7e34b60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699309, loc:(*time.Location)(0x7e34b60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699309, loc:(*time.Location)(0x7e34b60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.216", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.216"}}, StartTime:(*v1.Time)(0xc0034ef640), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0034ef700), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010d6af0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7f56157077e7a7e9c400f9b62ae31c090cc3a078261343d2551aefedd0b4ba27", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034ef760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034ef660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0045654ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:36:00.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7904" for this suite. • [SLOW TEST:51.196 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":38,"skipped":662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:36:00.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:36:00.774: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:36:04.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2913" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":707,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:36:04.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:36:05.173: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 10 23:36:07.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3917 create -f -' Aug 10 23:36:10.576: INFO: stderr: "" Aug 10 23:36:10.576: INFO: stdout: "e2e-test-crd-publish-openapi-5030-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 10 23:36:10.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3917 delete e2e-test-crd-publish-openapi-5030-crds test-cr' Aug 10 23:36:10.697: INFO: stderr: "" Aug 10 23:36:10.697: INFO: stdout: "e2e-test-crd-publish-openapi-5030-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 10 23:36:10.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3917 apply -f -' Aug 10 23:36:11.027: INFO: stderr: "" Aug 10 23:36:11.027: INFO: stdout: "e2e-test-crd-publish-openapi-5030-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 10 23:36:11.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3917 delete e2e-test-crd-publish-openapi-5030-crds test-cr' Aug 10 23:36:11.128: INFO: stderr: "" Aug 10 23:36:11.128: INFO: stdout: "e2e-test-crd-publish-openapi-5030-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 10 23:36:11.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5030-crds' Aug 10 23:36:11.413: INFO: stderr: "" Aug 10 23:36:11.413: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5030-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:36:14.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3917" for this suite. 
• [SLOW TEST:10.107 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":40,"skipped":734,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:36:14.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-21e396e2-ff5c-45ab-bfb0-b4594e5f0b3a STEP: Creating secret with name s-test-opt-upd-9f57dbc0-e4c0-439c-a79a-6cbb31b1dbf2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-21e396e2-ff5c-45ab-bfb0-b4594e5f0b3a STEP: Updating secret s-test-opt-upd-9f57dbc0-e4c0-439c-a79a-6cbb31b1dbf2 STEP: Creating secret with name s-test-opt-create-5093282c-5819-4874-b867-5e421b84b8b2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:36:23.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-547" for this suite. 
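The optional-secret case above deletes one source secret and creates another while the pod is running. The behavior hinges on optional: true in the projected source, sketched here with one of the secret names from the run:

    volumes:
    - name: secret-volumes
      projected:
        sources:
        - secret:
            name: s-test-opt-del-21e396e2-ff5c-45ab-bfb0-b4594e5f0b3a
            optional: true
    # an optional source that disappears does not break the volume, and the
    # kubelet refreshes the mounted contents once the secret (re)appears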
• [SLOW TEST:8.254 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":740,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:36:23.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 23:36:23.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641" in namespace "projected-3389" to be "Succeeded or Failed" Aug 10 23:36:23.295: INFO: Pod "downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641": Phase="Pending", Reason="", readiness=false. Elapsed: 20.428507ms Aug 10 23:36:25.299: INFO: Pod "downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024470367s Aug 10 23:36:27.303: INFO: Pod "downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641": Phase="Running", Reason="", readiness=true. Elapsed: 4.028863068s Aug 10 23:36:29.309: INFO: Pod "downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034363929s STEP: Saw pod success Aug 10 23:36:29.309: INFO: Pod "downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641" satisfied condition "Succeeded or Failed" Aug 10 23:36:29.312: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641 container client-container: STEP: delete the pod Aug 10 23:36:29.350: INFO: Waiting for pod downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641 to disappear Aug 10 23:36:29.358: INFO: Pod downwardapi-volume-359f8877-6344-4ea6-b343-d81e4ba75641 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:36:29.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3389" for this suite. 
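The downward API case above projects limits.memory for a container that sets no memory limit, so the published value falls back to node allocatable memory. The volume item in question looks roughly like:

    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
    # with no limit on client-container, the projected file carries the
    # node's allocatable memory rather than a container-level value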
• [SLOW TEST:6.419 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":759,"failed":0} [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:36:29.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3793 Aug 10 23:36:36.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 10 23:36:36.282: INFO: stderr: "I0810 23:36:36.190050 860 log.go:181] (0xc000e1b4a0) (0xc000999900) Create stream\nI0810 23:36:36.190106 860 log.go:181] (0xc000e1b4a0) (0xc000999900) Stream added, broadcasting: 1\nI0810 23:36:36.196213 860 log.go:181] (0xc000e1b4a0) Reply frame received for 1\nI0810 23:36:36.196269 860 log.go:181] (0xc000e1b4a0) (0xc000456640) Create stream\nI0810 23:36:36.196288 860 log.go:181] (0xc000e1b4a0) (0xc000456640) Stream added, broadcasting: 3\nI0810 23:36:36.198103 860 log.go:181] (0xc000e1b4a0) Reply frame received for 3\nI0810 23:36:36.198144 860 log.go:181] (0xc000e1b4a0) (0xc000442280) Create stream\nI0810 23:36:36.198155 860 log.go:181] (0xc000e1b4a0) (0xc000442280) Stream added, broadcasting: 5\nI0810 23:36:36.199224 860 log.go:181] (0xc000e1b4a0) Reply frame received for 5\nI0810 23:36:36.269526 860 log.go:181] (0xc000e1b4a0) Data frame received for 5\nI0810 23:36:36.269559 860 log.go:181] (0xc000442280) (5) Data frame handling\nI0810 23:36:36.269589 860 log.go:181] (0xc000442280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0810 23:36:36.275744 860 log.go:181] (0xc000e1b4a0) Data frame received for 3\nI0810 23:36:36.275799 860 log.go:181] (0xc000456640) (3) Data frame handling\nI0810 23:36:36.275840 860 log.go:181] (0xc000456640) (3) Data frame sent\nI0810 23:36:36.276069 860 log.go:181] (0xc000e1b4a0) Data frame received for 3\nI0810 23:36:36.276081 860 log.go:181] (0xc000456640) (3) Data frame handling\nI0810 23:36:36.276313 860 log.go:181] (0xc000e1b4a0) Data frame received for 5\nI0810 23:36:36.276340 860 log.go:181] (0xc000442280) (5) Data frame handling\nI0810 23:36:36.277804 860 
log.go:181] (0xc000e1b4a0) Data frame received for 1\nI0810 23:36:36.277823 860 log.go:181] (0xc000999900) (1) Data frame handling\nI0810 23:36:36.277834 860 log.go:181] (0xc000999900) (1) Data frame sent\nI0810 23:36:36.277849 860 log.go:181] (0xc000e1b4a0) (0xc000999900) Stream removed, broadcasting: 1\nI0810 23:36:36.277862 860 log.go:181] (0xc000e1b4a0) Go away received\nI0810 23:36:36.278138 860 log.go:181] (0xc000e1b4a0) (0xc000999900) Stream removed, broadcasting: 1\nI0810 23:36:36.278153 860 log.go:181] (0xc000e1b4a0) (0xc000456640) Stream removed, broadcasting: 3\nI0810 23:36:36.278158 860 log.go:181] (0xc000e1b4a0) (0xc000442280) Stream removed, broadcasting: 5\n" Aug 10 23:36:36.282: INFO: stdout: "iptables" Aug 10 23:36:36.282: INFO: proxyMode: iptables Aug 10 23:36:36.289: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 23:36:36.314: INFO: Pod kube-proxy-mode-detector still exists Aug 10 23:36:38.314: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 23:36:38.319: INFO: Pod kube-proxy-mode-detector still exists Aug 10 23:36:40.314: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 23:36:40.318: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-3793 STEP: creating replication controller affinity-nodeport-timeout in namespace services-3793 I0810 23:36:40.389905 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3793, replica count: 3 I0810 23:36:43.440318 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:36:46.440567 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 23:36:46.452: INFO: Creating new exec pod Aug 10 23:36:51.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Aug 10 23:36:51.760: INFO: stderr: "I0810 23:36:51.669173 878 log.go:181] (0xc000926bb0) (0xc00088c280) Create stream\nI0810 23:36:51.669223 878 log.go:181] (0xc000926bb0) (0xc00088c280) Stream added, broadcasting: 1\nI0810 23:36:51.670823 878 log.go:181] (0xc000926bb0) Reply frame received for 1\nI0810 23:36:51.670871 878 log.go:181] (0xc000926bb0) (0xc00088c780) Create stream\nI0810 23:36:51.670891 878 log.go:181] (0xc000926bb0) (0xc00088c780) Stream added, broadcasting: 3\nI0810 23:36:51.671667 878 log.go:181] (0xc000926bb0) Reply frame received for 3\nI0810 23:36:51.671719 878 log.go:181] (0xc000926bb0) (0xc0008683c0) Create stream\nI0810 23:36:51.671752 878 log.go:181] (0xc000926bb0) (0xc0008683c0) Stream added, broadcasting: 5\nI0810 23:36:51.672617 878 log.go:181] (0xc000926bb0) Reply frame received for 5\nI0810 23:36:51.752258 878 log.go:181] (0xc000926bb0) Data frame received for 5\nI0810 23:36:51.752292 878 log.go:181] (0xc0008683c0) (5) Data frame handling\nI0810 23:36:51.752316 878 log.go:181] (0xc0008683c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0810 23:36:51.752566 878 log.go:181] (0xc000926bb0) Data frame received for 5\nI0810 23:36:51.752585 878 log.go:181] (0xc0008683c0) (5) Data frame handling\nI0810 23:36:51.752597 878 log.go:181] (0xc0008683c0) (5) Data frame sent\nConnection 
to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0810 23:36:51.752853 878 log.go:181] (0xc000926bb0) Data frame received for 5\nI0810 23:36:51.752878 878 log.go:181] (0xc0008683c0) (5) Data frame handling\nI0810 23:36:51.753131 878 log.go:181] (0xc000926bb0) Data frame received for 3\nI0810 23:36:51.753142 878 log.go:181] (0xc00088c780) (3) Data frame handling\nI0810 23:36:51.754945 878 log.go:181] (0xc000926bb0) Data frame received for 1\nI0810 23:36:51.754970 878 log.go:181] (0xc00088c280) (1) Data frame handling\nI0810 23:36:51.754982 878 log.go:181] (0xc00088c280) (1) Data frame sent\nI0810 23:36:51.754990 878 log.go:181] (0xc000926bb0) (0xc00088c280) Stream removed, broadcasting: 1\nI0810 23:36:51.755196 878 log.go:181] (0xc000926bb0) Go away received\nI0810 23:36:51.755475 878 log.go:181] (0xc000926bb0) (0xc00088c280) Stream removed, broadcasting: 1\nI0810 23:36:51.755514 878 log.go:181] (0xc000926bb0) (0xc00088c780) Stream removed, broadcasting: 3\nI0810 23:36:51.755532 878 log.go:181] (0xc000926bb0) (0xc0008683c0) Stream removed, broadcasting: 5\n" Aug 10 23:36:51.760: INFO: stdout: "" Aug 10 23:36:51.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.222.104 80' Aug 10 23:36:51.978: INFO: stderr: "I0810 23:36:51.894680 896 log.go:181] (0xc00054b290) (0xc000b17a40) Create stream\nI0810 23:36:51.894731 896 log.go:181] (0xc00054b290) (0xc000b17a40) Stream added, broadcasting: 1\nI0810 23:36:51.900959 896 log.go:181] (0xc00054b290) Reply frame received for 1\nI0810 23:36:51.900996 896 log.go:181] (0xc00054b290) (0xc0009e28c0) Create stream\nI0810 23:36:51.901005 896 log.go:181] (0xc00054b290) (0xc0009e28c0) Stream added, broadcasting: 3\nI0810 23:36:51.901942 896 log.go:181] (0xc00054b290) Reply frame received for 3\nI0810 23:36:51.901980 896 log.go:181] (0xc00054b290) (0xc0009e2dc0) Create stream\nI0810 23:36:51.901990 896 log.go:181] (0xc00054b290) (0xc0009e2dc0) Stream added, broadcasting: 5\nI0810 23:36:51.902879 896 log.go:181] (0xc00054b290) Reply frame received for 5\nI0810 23:36:51.969869 896 log.go:181] (0xc00054b290) Data frame received for 3\nI0810 23:36:51.969901 896 log.go:181] (0xc0009e28c0) (3) Data frame handling\nI0810 23:36:51.969938 896 log.go:181] (0xc00054b290) Data frame received for 5\nI0810 23:36:51.969981 896 log.go:181] (0xc0009e2dc0) (5) Data frame handling\nI0810 23:36:51.970013 896 log.go:181] (0xc0009e2dc0) (5) Data frame sent\nI0810 23:36:51.970031 896 log.go:181] (0xc00054b290) Data frame received for 5\nI0810 23:36:51.970048 896 log.go:181] (0xc0009e2dc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.222.104 80\nConnection to 10.96.222.104 80 port [tcp/http] succeeded!\nI0810 23:36:51.971000 896 log.go:181] (0xc00054b290) Data frame received for 1\nI0810 23:36:51.971020 896 log.go:181] (0xc000b17a40) (1) Data frame handling\nI0810 23:36:51.971031 896 log.go:181] (0xc000b17a40) (1) Data frame sent\nI0810 23:36:51.971184 896 log.go:181] (0xc00054b290) (0xc000b17a40) Stream removed, broadcasting: 1\nI0810 23:36:51.971228 896 log.go:181] (0xc00054b290) Go away received\nI0810 23:36:51.971716 896 log.go:181] (0xc00054b290) (0xc000b17a40) Stream removed, broadcasting: 1\nI0810 23:36:51.971739 896 log.go:181] (0xc00054b290) (0xc0009e28c0) Stream removed, broadcasting: 3\nI0810 23:36:51.971748 896 log.go:181] (0xc00054b290) (0xc0009e2dc0) Stream removed, broadcasting: 5\n" Aug 10 23:36:51.978: 
INFO: stdout: "" Aug 10 23:36:51.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32634' Aug 10 23:36:52.158: INFO: stderr: "I0810 23:36:52.095499 914 log.go:181] (0xc0006faf20) (0xc000928dc0) Create stream\nI0810 23:36:52.095555 914 log.go:181] (0xc0006faf20) (0xc000928dc0) Stream added, broadcasting: 1\nI0810 23:36:52.097709 914 log.go:181] (0xc0006faf20) Reply frame received for 1\nI0810 23:36:52.097734 914 log.go:181] (0xc0006faf20) (0xc0004c20a0) Create stream\nI0810 23:36:52.097746 914 log.go:181] (0xc0006faf20) (0xc0004c20a0) Stream added, broadcasting: 3\nI0810 23:36:52.098532 914 log.go:181] (0xc0006faf20) Reply frame received for 3\nI0810 23:36:52.098560 914 log.go:181] (0xc0006faf20) (0xc0004c2780) Create stream\nI0810 23:36:52.098569 914 log.go:181] (0xc0006faf20) (0xc0004c2780) Stream added, broadcasting: 5\nI0810 23:36:52.099343 914 log.go:181] (0xc0006faf20) Reply frame received for 5\nI0810 23:36:52.151689 914 log.go:181] (0xc0006faf20) Data frame received for 5\nI0810 23:36:52.151753 914 log.go:181] (0xc0006faf20) Data frame received for 3\nI0810 23:36:52.151799 914 log.go:181] (0xc0004c20a0) (3) Data frame handling\nI0810 23:36:52.151849 914 log.go:181] (0xc0004c2780) (5) Data frame handling\nI0810 23:36:52.151883 914 log.go:181] (0xc0004c2780) (5) Data frame sent\nI0810 23:36:52.151899 914 log.go:181] (0xc0006faf20) Data frame received for 5\nI0810 23:36:52.151908 914 log.go:181] (0xc0004c2780) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32634\nConnection to 172.18.0.14 32634 port [tcp/32634] succeeded!\nI0810 23:36:52.153134 914 log.go:181] (0xc0006faf20) Data frame received for 1\nI0810 23:36:52.153165 914 log.go:181] (0xc000928dc0) (1) Data frame handling\nI0810 23:36:52.153186 914 log.go:181] (0xc000928dc0) (1) Data frame sent\nI0810 23:36:52.153205 914 log.go:181] (0xc0006faf20) (0xc000928dc0) Stream removed, broadcasting: 1\nI0810 23:36:52.153236 914 log.go:181] (0xc0006faf20) Go away received\nI0810 23:36:52.153748 914 log.go:181] (0xc0006faf20) (0xc000928dc0) Stream removed, broadcasting: 1\nI0810 23:36:52.153767 914 log.go:181] (0xc0006faf20) (0xc0004c20a0) Stream removed, broadcasting: 3\nI0810 23:36:52.153777 914 log.go:181] (0xc0006faf20) (0xc0004c2780) Stream removed, broadcasting: 5\n" Aug 10 23:36:52.158: INFO: stdout: "" Aug 10 23:36:52.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32634' Aug 10 23:36:52.367: INFO: stderr: "I0810 23:36:52.283265 931 log.go:181] (0xc0007bc000) (0xc000ac08c0) Create stream\nI0810 23:36:52.283373 931 log.go:181] (0xc0007bc000) (0xc000ac08c0) Stream added, broadcasting: 1\nI0810 23:36:52.285671 931 log.go:181] (0xc0007bc000) Reply frame received for 1\nI0810 23:36:52.285726 931 log.go:181] (0xc0007bc000) (0xc000ac1c20) Create stream\nI0810 23:36:52.285743 931 log.go:181] (0xc0007bc000) (0xc000ac1c20) Stream added, broadcasting: 3\nI0810 23:36:52.286535 931 log.go:181] (0xc0007bc000) Reply frame received for 3\nI0810 23:36:52.286560 931 log.go:181] (0xc0007bc000) (0xc000aba460) Create stream\nI0810 23:36:52.286569 931 log.go:181] (0xc0007bc000) (0xc000aba460) Stream added, broadcasting: 5\nI0810 23:36:52.287282 931 log.go:181] (0xc0007bc000) Reply frame received for 5\nI0810 23:36:52.359642 931 
log.go:181] (0xc0007bc000) Data frame received for 3\nI0810 23:36:52.359683 931 log.go:181] (0xc000ac1c20) (3) Data frame handling\nI0810 23:36:52.359704 931 log.go:181] (0xc0007bc000) Data frame received for 5\nI0810 23:36:52.359714 931 log.go:181] (0xc000aba460) (5) Data frame handling\nI0810 23:36:52.359724 931 log.go:181] (0xc000aba460) (5) Data frame sent\nI0810 23:36:52.359732 931 log.go:181] (0xc0007bc000) Data frame received for 5\nI0810 23:36:52.359739 931 log.go:181] (0xc000aba460) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32634\nConnection to 172.18.0.12 32634 port [tcp/32634] succeeded!\nI0810 23:36:52.361039 931 log.go:181] (0xc0007bc000) Data frame received for 1\nI0810 23:36:52.361055 931 log.go:181] (0xc000ac08c0) (1) Data frame handling\nI0810 23:36:52.361062 931 log.go:181] (0xc000ac08c0) (1) Data frame sent\nI0810 23:36:52.361069 931 log.go:181] (0xc0007bc000) (0xc000ac08c0) Stream removed, broadcasting: 1\nI0810 23:36:52.361084 931 log.go:181] (0xc0007bc000) Go away received\nI0810 23:36:52.361518 931 log.go:181] (0xc0007bc000) (0xc000ac08c0) Stream removed, broadcasting: 1\nI0810 23:36:52.361536 931 log.go:181] (0xc0007bc000) (0xc000ac1c20) Stream removed, broadcasting: 3\nI0810 23:36:52.361545 931 log.go:181] (0xc0007bc000) (0xc000aba460) Stream removed, broadcasting: 5\n" Aug 10 23:36:52.367: INFO: stdout: "" Aug 10 23:36:52.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32634/ ; done' Aug 10 23:36:52.682: INFO: stderr: "I0810 23:36:52.511264 949 log.go:181] (0xc000a0c000) (0xc0002fabe0) Create stream\nI0810 23:36:52.511317 949 log.go:181] (0xc000a0c000) (0xc0002fabe0) Stream added, broadcasting: 1\nI0810 23:36:52.515354 949 log.go:181] (0xc000a0c000) Reply frame received for 1\nI0810 23:36:52.515419 949 log.go:181] (0xc000a0c000) (0xc000280000) Create stream\nI0810 23:36:52.515442 949 log.go:181] (0xc000a0c000) (0xc000280000) Stream added, broadcasting: 3\nI0810 23:36:52.517428 949 log.go:181] (0xc000a0c000) Reply frame received for 3\nI0810 23:36:52.517463 949 log.go:181] (0xc000a0c000) (0xc00026c280) Create stream\nI0810 23:36:52.517473 949 log.go:181] (0xc000a0c000) (0xc00026c280) Stream added, broadcasting: 5\nI0810 23:36:52.518126 949 log.go:181] (0xc000a0c000) Reply frame received for 5\nI0810 23:36:52.578457 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.578484 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.578494 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.578503 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.578533 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.578549 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.584328 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.584344 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.584352 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.585092 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.585134 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.585150 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.585171 949 log.go:181] (0xc000a0c000) Data 
frame received for 5\nI0810 23:36:52.585183 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.585196 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.592168 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.592205 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.592230 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.592718 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.592832 949 log.go:181] (0xc00026c280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.592858 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.592900 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.592923 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.592949 949 log.go:181] (0xc00026c280) (5) Data frame sent\nI0810 23:36:52.599422 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.599460 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.599479 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.600071 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.600105 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.600132 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.600150 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.600165 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.600179 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.606979 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.607011 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.607039 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.607521 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.607537 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.607542 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.607577 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.607606 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.607635 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.613916 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.613935 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.613943 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.614657 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.614681 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.614690 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.614697 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.614703 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.614709 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.619039 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.619064 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.619080 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.619762 
949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.619786 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.619812 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.619832 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.619861 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.619876 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.624709 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.624803 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.624815 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.625364 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.625379 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.625391 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.625407 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.625427 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.625457 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.630065 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.630097 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.630125 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.630574 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.630596 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.630627 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.630644 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.630664 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.630674 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.636188 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.636210 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.636228 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.637139 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.637157 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.637173 949 log.go:181] (0xc00026c280) (5) Data frame sent\nI0810 23:36:52.637183 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.637192 949 log.go:181] (0xc00026c280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.637210 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.637226 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.637236 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.637251 949 log.go:181] (0xc00026c280) (5) Data frame sent\nI0810 23:36:52.641502 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.641532 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.641554 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.642164 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.642184 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.642196 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.642219 949 log.go:181] (0xc000a0c000) 
Data frame received for 5\nI0810 23:36:52.642228 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.642240 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.646547 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.646561 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.646570 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.647323 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.647348 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.647361 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.647377 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.647393 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.647405 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.652098 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.652113 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.652122 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.652965 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.652997 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.653014 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.653046 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.653060 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.653070 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.657926 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.657943 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.657956 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.658464 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.658482 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.658510 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.658537 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.658553 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.658569 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.663585 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.663620 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.663640 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.664524 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.664634 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.664807 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.664840 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.664851 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.664865 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.668834 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.668860 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.668879 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 
23:36:52.669448 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.669486 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.669503 949 log.go:181] (0xc00026c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.669522 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.669535 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.669546 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.674689 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.674710 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.674728 949 log.go:181] (0xc000280000) (3) Data frame sent\nI0810 23:36:52.675240 949 log.go:181] (0xc000a0c000) Data frame received for 5\nI0810 23:36:52.675258 949 log.go:181] (0xc00026c280) (5) Data frame handling\nI0810 23:36:52.675427 949 log.go:181] (0xc000a0c000) Data frame received for 3\nI0810 23:36:52.675438 949 log.go:181] (0xc000280000) (3) Data frame handling\nI0810 23:36:52.677335 949 log.go:181] (0xc000a0c000) Data frame received for 1\nI0810 23:36:52.677439 949 log.go:181] (0xc0002fabe0) (1) Data frame handling\nI0810 23:36:52.677522 949 log.go:181] (0xc0002fabe0) (1) Data frame sent\nI0810 23:36:52.677556 949 log.go:181] (0xc000a0c000) (0xc0002fabe0) Stream removed, broadcasting: 1\nI0810 23:36:52.677579 949 log.go:181] (0xc000a0c000) Go away received\nI0810 23:36:52.678018 949 log.go:181] (0xc000a0c000) (0xc0002fabe0) Stream removed, broadcasting: 1\nI0810 23:36:52.678049 949 log.go:181] (0xc000a0c000) (0xc000280000) Stream removed, broadcasting: 3\nI0810 23:36:52.678066 949 log.go:181] (0xc000a0c000) (0xc00026c280) Stream removed, broadcasting: 5\n" Aug 10 23:36:52.683: INFO: stdout: "\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8\naffinity-nodeport-timeout-tltc8" Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: 
affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Received response from host: affinity-nodeport-timeout-tltc8 Aug 10 23:36:52.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32634/' Aug 10 23:36:52.910: INFO: stderr: "I0810 23:36:52.817651 967 log.go:181] (0xc000ad56b0) (0xc0004cf180) Create stream\nI0810 23:36:52.817709 967 log.go:181] (0xc000ad56b0) (0xc0004cf180) Stream added, broadcasting: 1\nI0810 23:36:52.820121 967 log.go:181] (0xc000ad56b0) Reply frame received for 1\nI0810 23:36:52.820181 967 log.go:181] (0xc000ad56b0) (0xc000330320) Create stream\nI0810 23:36:52.820210 967 log.go:181] (0xc000ad56b0) (0xc000330320) Stream added, broadcasting: 3\nI0810 23:36:52.821199 967 log.go:181] (0xc000ad56b0) Reply frame received for 3\nI0810 23:36:52.821229 967 log.go:181] (0xc000ad56b0) (0xc0004cfe00) Create stream\nI0810 23:36:52.821238 967 log.go:181] (0xc000ad56b0) (0xc0004cfe00) Stream added, broadcasting: 5\nI0810 23:36:52.822013 967 log.go:181] (0xc000ad56b0) Reply frame received for 5\nI0810 23:36:52.897048 967 log.go:181] (0xc000ad56b0) Data frame received for 5\nI0810 23:36:52.897095 967 log.go:181] (0xc0004cfe00) (5) Data frame handling\nI0810 23:36:52.897125 967 log.go:181] (0xc0004cfe00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:36:52.901684 967 log.go:181] (0xc000ad56b0) Data frame received for 3\nI0810 23:36:52.901704 967 log.go:181] (0xc000330320) (3) Data frame handling\nI0810 23:36:52.901732 967 log.go:181] (0xc000330320) (3) Data frame sent\nI0810 23:36:52.902625 967 log.go:181] (0xc000ad56b0) Data frame received for 3\nI0810 23:36:52.902653 967 log.go:181] (0xc000330320) (3) Data frame handling\nI0810 23:36:52.902685 967 log.go:181] (0xc000ad56b0) Data frame received for 5\nI0810 23:36:52.902703 967 log.go:181] (0xc0004cfe00) (5) Data frame handling\nI0810 23:36:52.904502 967 log.go:181] (0xc000ad56b0) Data frame received for 1\nI0810 23:36:52.904529 967 log.go:181] (0xc0004cf180) (1) Data frame handling\nI0810 23:36:52.904576 967 log.go:181] (0xc0004cf180) (1) Data frame sent\nI0810 23:36:52.904601 967 log.go:181] (0xc000ad56b0) (0xc0004cf180) Stream removed, broadcasting: 1\nI0810 23:36:52.904623 967 log.go:181] (0xc000ad56b0) Go away received\nI0810 23:36:52.905126 967 log.go:181] (0xc000ad56b0) (0xc0004cf180) Stream removed, broadcasting: 1\nI0810 23:36:52.905153 967 log.go:181] (0xc000ad56b0) (0xc000330320) Stream removed, broadcasting: 3\nI0810 23:36:52.905162 967 log.go:181] (0xc000ad56b0) (0xc0004cfe00) Stream removed, broadcasting: 5\n" Aug 10 23:36:52.910: INFO: stdout: "affinity-nodeport-timeout-tltc8" Aug 10 23:37:07.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32634/' Aug 10 23:37:08.163: INFO: stderr: "I0810 23:37:08.050605 985 log.go:181] (0xc00084b3f0) (0xc0009ba960) Create stream\nI0810 23:37:08.050666 985 log.go:181] (0xc00084b3f0) (0xc0009ba960) Stream added, broadcasting: 1\nI0810 23:37:08.054150 985 log.go:181] (0xc00084b3f0) Reply frame received for 1\nI0810 23:37:08.054182 985 log.go:181] (0xc00084b3f0) (0xc000720a00) Create stream\nI0810 
23:37:08.054193 985 log.go:181] (0xc00084b3f0) (0xc000720a00) Stream added, broadcasting: 3\nI0810 23:37:08.055060 985 log.go:181] (0xc00084b3f0) Reply frame received for 3\nI0810 23:37:08.055108 985 log.go:181] (0xc00084b3f0) (0xc0003f95e0) Create stream\nI0810 23:37:08.055125 985 log.go:181] (0xc00084b3f0) (0xc0003f95e0) Stream added, broadcasting: 5\nI0810 23:37:08.055997 985 log.go:181] (0xc00084b3f0) Reply frame received for 5\nI0810 23:37:08.151068 985 log.go:181] (0xc00084b3f0) Data frame received for 5\nI0810 23:37:08.151101 985 log.go:181] (0xc0003f95e0) (5) Data frame handling\nI0810 23:37:08.151124 985 log.go:181] (0xc0003f95e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:37:08.155401 985 log.go:181] (0xc00084b3f0) Data frame received for 3\nI0810 23:37:08.155413 985 log.go:181] (0xc000720a00) (3) Data frame handling\nI0810 23:37:08.155422 985 log.go:181] (0xc000720a00) (3) Data frame sent\nI0810 23:37:08.156122 985 log.go:181] (0xc00084b3f0) Data frame received for 3\nI0810 23:37:08.156168 985 log.go:181] (0xc000720a00) (3) Data frame handling\nI0810 23:37:08.156217 985 log.go:181] (0xc00084b3f0) Data frame received for 5\nI0810 23:37:08.156247 985 log.go:181] (0xc0003f95e0) (5) Data frame handling\nI0810 23:37:08.158125 985 log.go:181] (0xc00084b3f0) Data frame received for 1\nI0810 23:37:08.158158 985 log.go:181] (0xc0009ba960) (1) Data frame handling\nI0810 23:37:08.158185 985 log.go:181] (0xc0009ba960) (1) Data frame sent\nI0810 23:37:08.158207 985 log.go:181] (0xc00084b3f0) (0xc0009ba960) Stream removed, broadcasting: 1\nI0810 23:37:08.158232 985 log.go:181] (0xc00084b3f0) Go away received\nI0810 23:37:08.158628 985 log.go:181] (0xc00084b3f0) (0xc0009ba960) Stream removed, broadcasting: 1\nI0810 23:37:08.158653 985 log.go:181] (0xc00084b3f0) (0xc000720a00) Stream removed, broadcasting: 3\nI0810 23:37:08.158668 985 log.go:181] (0xc00084b3f0) (0xc0003f95e0) Stream removed, broadcasting: 5\n" Aug 10 23:37:08.164: INFO: stdout: "affinity-nodeport-timeout-tltc8" Aug 10 23:37:23.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32634/' Aug 10 23:37:23.408: INFO: stderr: "I0810 23:37:23.304295 1003 log.go:181] (0xc000e92fd0) (0xc0008ba460) Create stream\nI0810 23:37:23.304344 1003 log.go:181] (0xc000e92fd0) (0xc0008ba460) Stream added, broadcasting: 1\nI0810 23:37:23.307003 1003 log.go:181] (0xc000e92fd0) Reply frame received for 1\nI0810 23:37:23.307033 1003 log.go:181] (0xc000e92fd0) (0xc000534500) Create stream\nI0810 23:37:23.307040 1003 log.go:181] (0xc000e92fd0) (0xc000534500) Stream added, broadcasting: 3\nI0810 23:37:23.307693 1003 log.go:181] (0xc000e92fd0) Reply frame received for 3\nI0810 23:37:23.307719 1003 log.go:181] (0xc000e92fd0) (0xc000a7d0e0) Create stream\nI0810 23:37:23.307735 1003 log.go:181] (0xc000e92fd0) (0xc000a7d0e0) Stream added, broadcasting: 5\nI0810 23:37:23.308255 1003 log.go:181] (0xc000e92fd0) Reply frame received for 5\nI0810 23:37:23.395317 1003 log.go:181] (0xc000e92fd0) Data frame received for 5\nI0810 23:37:23.395341 1003 log.go:181] (0xc000a7d0e0) (5) Data frame handling\nI0810 23:37:23.395357 1003 log.go:181] (0xc000a7d0e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:37:23.401013 1003 log.go:181] (0xc000e92fd0) Data frame received for 3\nI0810 23:37:23.401040 1003 
log.go:181] (0xc000534500) (3) Data frame handling\nI0810 23:37:23.401070 1003 log.go:181] (0xc000534500) (3) Data frame sent\nI0810 23:37:23.401751 1003 log.go:181] (0xc000e92fd0) Data frame received for 5\nI0810 23:37:23.401782 1003 log.go:181] (0xc000a7d0e0) (5) Data frame handling\nI0810 23:37:23.401981 1003 log.go:181] (0xc000e92fd0) Data frame received for 3\nI0810 23:37:23.401996 1003 log.go:181] (0xc000534500) (3) Data frame handling\nI0810 23:37:23.403920 1003 log.go:181] (0xc000e92fd0) Data frame received for 1\nI0810 23:37:23.403933 1003 log.go:181] (0xc0008ba460) (1) Data frame handling\nI0810 23:37:23.403947 1003 log.go:181] (0xc0008ba460) (1) Data frame sent\nI0810 23:37:23.403961 1003 log.go:181] (0xc000e92fd0) (0xc0008ba460) Stream removed, broadcasting: 1\nI0810 23:37:23.404057 1003 log.go:181] (0xc000e92fd0) Go away received\nI0810 23:37:23.404243 1003 log.go:181] (0xc000e92fd0) (0xc0008ba460) Stream removed, broadcasting: 1\nI0810 23:37:23.404259 1003 log.go:181] (0xc000e92fd0) (0xc000534500) Stream removed, broadcasting: 3\nI0810 23:37:23.404266 1003 log.go:181] (0xc000e92fd0) (0xc000a7d0e0) Stream removed, broadcasting: 5\n" Aug 10 23:37:23.409: INFO: stdout: "affinity-nodeport-timeout-tltc8" Aug 10 23:37:38.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3793 execpod-affinityzxcs5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32634/' Aug 10 23:37:38.647: INFO: stderr: "I0810 23:37:38.560927 1021 log.go:181] (0xc000736d10) (0xc000d903c0) Create stream\nI0810 23:37:38.560997 1021 log.go:181] (0xc000736d10) (0xc000d903c0) Stream added, broadcasting: 1\nI0810 23:37:38.565357 1021 log.go:181] (0xc000736d10) Reply frame received for 1\nI0810 23:37:38.565397 1021 log.go:181] (0xc000736d10) (0xc000b83180) Create stream\nI0810 23:37:38.565409 1021 log.go:181] (0xc000736d10) (0xc000b83180) Stream added, broadcasting: 3\nI0810 23:37:38.566424 1021 log.go:181] (0xc000736d10) Reply frame received for 3\nI0810 23:37:38.566457 1021 log.go:181] (0xc000736d10) (0xc000b7a460) Create stream\nI0810 23:37:38.566470 1021 log.go:181] (0xc000736d10) (0xc000b7a460) Stream added, broadcasting: 5\nI0810 23:37:38.567557 1021 log.go:181] (0xc000736d10) Reply frame received for 5\nI0810 23:37:38.638298 1021 log.go:181] (0xc000736d10) Data frame received for 5\nI0810 23:37:38.638331 1021 log.go:181] (0xc000b7a460) (5) Data frame handling\nI0810 23:37:38.638351 1021 log.go:181] (0xc000b7a460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32634/\nI0810 23:37:38.640120 1021 log.go:181] (0xc000736d10) Data frame received for 3\nI0810 23:37:38.640140 1021 log.go:181] (0xc000b83180) (3) Data frame handling\nI0810 23:37:38.640154 1021 log.go:181] (0xc000b83180) (3) Data frame sent\nI0810 23:37:38.640673 1021 log.go:181] (0xc000736d10) Data frame received for 5\nI0810 23:37:38.640701 1021 log.go:181] (0xc000b7a460) (5) Data frame handling\nI0810 23:37:38.640826 1021 log.go:181] (0xc000736d10) Data frame received for 3\nI0810 23:37:38.640858 1021 log.go:181] (0xc000b83180) (3) Data frame handling\nI0810 23:37:38.642725 1021 log.go:181] (0xc000736d10) Data frame received for 1\nI0810 23:37:38.642752 1021 log.go:181] (0xc000d903c0) (1) Data frame handling\nI0810 23:37:38.642770 1021 log.go:181] (0xc000d903c0) (1) Data frame sent\nI0810 23:37:38.642793 1021 log.go:181] (0xc000736d10) (0xc000d903c0) Stream removed, broadcasting: 1\nI0810 23:37:38.642822 1021 log.go:181] 
(0xc000736d10) Go away received\nI0810 23:37:38.643178 1021 log.go:181] (0xc000736d10) (0xc000d903c0) Stream removed, broadcasting: 1\nI0810 23:37:38.643192 1021 log.go:181] (0xc000736d10) (0xc000b83180) Stream removed, broadcasting: 3\nI0810 23:37:38.643199 1021 log.go:181] (0xc000736d10) (0xc000b7a460) Stream removed, broadcasting: 5\n" Aug 10 23:37:38.647: INFO: stdout: "affinity-nodeport-timeout-lwc5q" Aug 10 23:37:38.647: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3793, will wait for the garbage collector to delete the pods Aug 10 23:37:38.790: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.487767ms Aug 10 23:37:39.190: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.181984ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:37:54.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3793" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:84.474 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":43,"skipped":759,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:37:54.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 10 23:37:54.179: INFO: Waiting up to 5m0s for pod "downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f" in namespace "downward-api-2096" to be "Succeeded or Failed" Aug 10 23:37:54.196: INFO: Pod "downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.090298ms Aug 10 23:37:56.200: INFO: Pod "downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021326939s Aug 10 23:37:58.205: INFO: Pod "downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025642542s STEP: Saw pod success Aug 10 23:37:58.205: INFO: Pod "downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f" satisfied condition "Succeeded or Failed" Aug 10 23:37:58.208: INFO: Trying to get logs from node latest-worker2 pod downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f container dapi-container: STEP: delete the pod Aug 10 23:37:58.285: INFO: Waiting for pod downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f to disappear Aug 10 23:37:58.295: INFO: Pod downward-api-96195441-8d44-4d78-8dd1-86abc6cb0d4f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:37:58.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2096" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":764,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:37:58.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 23:37:58.869: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 23:38:00.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699478, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699478, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699478, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699478, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 23:38:03.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the 
collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:04.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4845" for this suite. STEP: Destroying namespace "webhook-4845-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.342 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":45,"skipped":779,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:04.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-6544/secret-test-d3ae1005-8bda-488e-9379-2a67dfd9372f STEP: Creating a pod to test consume secrets Aug 10 23:38:04.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0" in namespace "secrets-6544" to be "Succeeded or Failed" Aug 10 23:38:04.777: INFO: Pod "pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.675432ms Aug 10 23:38:06.781: INFO: Pod "pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015581065s Aug 10 23:38:08.786: INFO: Pod "pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0": Phase="Running", Reason="", readiness=true. Elapsed: 4.020239625s Aug 10 23:38:10.790: INFO: Pod "pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.02500227s STEP: Saw pod success Aug 10 23:38:10.791: INFO: Pod "pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0" satisfied condition "Succeeded or Failed" Aug 10 23:38:10.794: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0 container env-test: STEP: delete the pod Aug 10 23:38:10.829: INFO: Waiting for pod pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0 to disappear Aug 10 23:38:10.839: INFO: Pod pod-configmaps-f1ac9b01-ee78-475a-8e47-5757a46debe0 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:10.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6544" for this suite. • [SLOW TEST:6.200 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":786,"failed":0} S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:10.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 10 23:38:15.488: INFO: Successfully updated pod "pod-update-activedeadlineseconds-11dbdeac-3dc1-409a-9037-c5363ff85d6d" Aug 10 23:38:15.488: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-11dbdeac-3dc1-409a-9037-c5363ff85d6d" in namespace "pods-1018" to be "terminated due to deadline exceeded" Aug 10 23:38:15.607: INFO: Pod "pod-update-activedeadlineseconds-11dbdeac-3dc1-409a-9037-c5363ff85d6d": Phase="Running", Reason="", readiness=true. Elapsed: 118.47416ms Aug 10 23:38:17.611: INFO: Pod "pod-update-activedeadlineseconds-11dbdeac-3dc1-409a-9037-c5363ff85d6d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.122895366s Aug 10 23:38:17.611: INFO: Pod "pod-update-activedeadlineseconds-11dbdeac-3dc1-409a-9037-c5363ff85d6d" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:17.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1018" for this suite. 
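
The pods-1018 spec relies on spec.activeDeadlineSeconds being one of the few mutable fields of a running pod's spec: it may be set or lowered, but never raised or cleared. A client-go sketch of the same update, assuming a running pod named pod-update-demo in the default namespace (both placeholders) and a kubeconfig at the conventional path; after the deadline expires, the kubelet kills the pod and it reports Phase="Failed" with reason DeadlineExceeded, exactly as logged above.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig (path is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Strategic-merge patch setting a short deadline on a live pod;
        // the apiserver rejects attempts to raise or remove it later.
        patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
        pod, err := client.CoreV1().Pods("default").Patch(
            context.Background(), "pod-update-demo",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("patched:", pod.Name, "deadline:", *pod.Spec.ActiveDeadlineSeconds)
    }
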
• [SLOW TEST:6.775 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":47,"skipped":787,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:17.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:38:17.786: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:18.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9692" for this suite. 
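
Custom resource defaulting, exercised by the spec above, hangs off the default keyword in a structural OpenAPI v3 schema: the apiserver fills the value in at admission time for writes, and again when serving objects read back from storage that were persisted before the default existed, which is why the test checks both paths. A sketch of a CRD carrying such a default (the widgets.example.com group and the color field are invented for illustration):

    package main

    import (
        "encoding/json"
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        crd := apiextv1.CustomResourceDefinition{
            TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1", Kind: "CustomResourceDefinition"},
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget",
                    Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                            Type: "object",
                            Properties: map[string]apiextv1.JSONSchemaProps{
                                "spec": {
                                    Type: "object",
                                    Properties: map[string]apiextv1.JSONSchemaProps{
                                        // Omitted on create -> stored and served as "red".
                                        "color": {
                                            Type:    "string",
                                            Default: &apiextv1.JSON{Raw: []byte(`"red"`)},
                                        },
                                    },
                                },
                            },
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(crd, "", "  ")
        fmt.Println(string(out))
    }
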
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":48,"skipped":797,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:18.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:38:19.025: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 10 23:38:22.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7280 create -f -' Aug 10 23:38:25.615: INFO: stderr: "" Aug 10 23:38:25.615: INFO: stdout: "e2e-test-crd-publish-openapi-9444-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 10 23:38:25.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7280 delete e2e-test-crd-publish-openapi-9444-crds test-cr' Aug 10 23:38:25.726: INFO: stderr: "" Aug 10 23:38:25.726: INFO: stdout: "e2e-test-crd-publish-openapi-9444-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 10 23:38:25.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7280 apply -f -' Aug 10 23:38:26.059: INFO: stderr: "" Aug 10 23:38:26.059: INFO: stdout: "e2e-test-crd-publish-openapi-9444-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 10 23:38:26.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7280 delete e2e-test-crd-publish-openapi-9444-crds test-cr' Aug 10 23:38:26.181: INFO: stderr: "" Aug 10 23:38:26.181: INFO: stdout: "e2e-test-crd-publish-openapi-9444-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 10 23:38:26.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9444-crds' Aug 10 23:38:26.520: INFO: stderr: "" Aug 10 23:38:26.520: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9444-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:29.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7280" for this suite. 
• [SLOW TEST:10.599 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":49,"skipped":800,"failed":0} SSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:29.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:29.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8849" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":50,"skipped":808,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:29.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 10 23:38:29.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2833 /api/v1/namespaces/watch-2833/configmaps/e2e-watch-test-watch-closed 919747cb-c68a-435e-8a48-e57fb5b3f063 6038141 0 2020-08-10 23:38:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-10 23:38:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 23:38:29.759: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2833 /api/v1/namespaces/watch-2833/configmaps/e2e-watch-test-watch-closed 919747cb-c68a-435e-8a48-e57fb5b3f063 6038142 0 2020-08-10 23:38:29 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-10 23:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 10 23:38:29.826: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2833 /api/v1/namespaces/watch-2833/configmaps/e2e-watch-test-watch-closed 919747cb-c68a-435e-8a48-e57fb5b3f063 6038143 0 2020-08-10 23:38:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-10 23:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 23:38:29.826: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2833 /api/v1/namespaces/watch-2833/configmaps/e2e-watch-test-watch-closed 919747cb-c68a-435e-8a48-e57fb5b3f063 6038144 0 2020-08-10 23:38:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-10 23:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:29.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2833" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":51,"skipped":829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:29.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:38:29.887: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:38:34.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2268" for this suite. 
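kubectl exec drives the same exec subresource the websocket test above hits: the apiserver upgrades the connection and multiplexes stdin/stdout/stderr as numbered streams. A minimal sketch, hypothetical pod name ws-demo, assuming a reachable cluster:

  kubectl run ws-demo --image=busybox --restart=Never -- sleep 3600
  kubectl wait --for=condition=Ready pod/ws-demo --timeout=120s

  # Issues a POST to .../pods/ws-demo/exec over an upgraded,
  # multiplexed streaming connection.
  kubectl exec ws-demo -- sh -c 'echo remote command execution works'

  kubectl delete pod ws-demo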
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":857,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:38:34.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 10 23:38:34.910: INFO: Pod name wrapped-volume-race-242c42a9-df2f-40b9-aee1-2f55764c07a3: Found 0 pods out of 5 Aug 10 23:38:39.935: INFO: Pod name wrapped-volume-race-242c42a9-df2f-40b9-aee1-2f55764c07a3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-242c42a9-df2f-40b9-aee1-2f55764c07a3 in namespace emptydir-wrapper-5544, will wait for the garbage collector to delete the pods Aug 10 23:38:54.146: INFO: Deleting ReplicationController wrapped-volume-race-242c42a9-df2f-40b9-aee1-2f55764c07a3 took: 43.151927ms Aug 10 23:38:54.646: INFO: Terminating ReplicationController wrapped-volume-race-242c42a9-df2f-40b9-aee1-2f55764c07a3 pods took: 500.246667ms STEP: Creating RC which spawns configmap-volume pods Aug 10 23:39:03.393: INFO: Pod name wrapped-volume-race-970249dd-6e91-416d-bb8d-0436e13afc17: Found 0 pods out of 5 Aug 10 23:39:08.401: INFO: Pod name wrapped-volume-race-970249dd-6e91-416d-bb8d-0436e13afc17: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-970249dd-6e91-416d-bb8d-0436e13afc17 in namespace emptydir-wrapper-5544, will wait for the garbage collector to delete the pods Aug 10 23:39:22.485: INFO: Deleting ReplicationController wrapped-volume-race-970249dd-6e91-416d-bb8d-0436e13afc17 took: 8.810196ms Aug 10 23:39:22.885: INFO: Terminating ReplicationController wrapped-volume-race-970249dd-6e91-416d-bb8d-0436e13afc17 pods took: 400.213209ms STEP: Creating RC which spawns configmap-volume pods Aug 10 23:39:33.500: INFO: Pod name wrapped-volume-race-284b75d4-e64b-4a9f-a75d-fc0b520b7638: Found 0 pods out of 5 Aug 10 23:39:38.514: INFO: Pod name wrapped-volume-race-284b75d4-e64b-4a9f-a75d-fc0b520b7638: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-284b75d4-e64b-4a9f-a75d-fc0b520b7638 in namespace emptydir-wrapper-5544, will wait for the garbage collector to delete the pods Aug 10 23:39:52.626: INFO: Deleting ReplicationController wrapped-volume-race-284b75d4-e64b-4a9f-a75d-fc0b520b7638 took: 12.16252ms Aug 10 23:39:53.026: INFO: Terminating ReplicationController wrapped-volume-race-284b75d4-e64b-4a9f-a75d-fc0b520b7638 pods took: 400.221875ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 
23:40:04.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5544" for this suite. • [SLOW TEST:90.136 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":53,"skipped":858,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:40:04.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 10 23:40:04.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2311' Aug 10 23:40:04.733: INFO: stderr: "" Aug 10 23:40:04.733: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 10 23:40:04.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2311' Aug 10 23:40:04.879: INFO: stderr: "" Aug 10 23:40:04.879: INFO: stdout: "update-demo-nautilus-jq4kr update-demo-nautilus-l8vw5 " Aug 10 23:40:04.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jq4kr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2311' Aug 10 23:40:04.991: INFO: stderr: "" Aug 10 23:40:04.991: INFO: stdout: "" Aug 10 23:40:04.991: INFO: update-demo-nautilus-jq4kr is created but not running Aug 10 23:40:09.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2311' Aug 10 23:40:10.157: INFO: stderr: "" Aug 10 23:40:10.157: INFO: stdout: "update-demo-nautilus-jq4kr update-demo-nautilus-l8vw5 " Aug 10 23:40:10.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jq4kr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2311' Aug 10 23:40:10.280: INFO: stderr: "" Aug 10 23:40:10.280: INFO: stdout: "true" Aug 10 23:40:10.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jq4kr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2311' Aug 10 23:40:10.423: INFO: stderr: "" Aug 10 23:40:10.423: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 23:40:10.423: INFO: validating pod update-demo-nautilus-jq4kr Aug 10 23:40:10.435: INFO: got data: { "image": "nautilus.jpg" } Aug 10 23:40:10.435: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 10 23:40:10.436: INFO: update-demo-nautilus-jq4kr is verified up and running Aug 10 23:40:10.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8vw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2311' Aug 10 23:40:10.555: INFO: stderr: "" Aug 10 23:40:10.555: INFO: stdout: "true" Aug 10 23:40:10.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8vw5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2311' Aug 10 23:40:10.670: INFO: stderr: "" Aug 10 23:40:10.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 23:40:10.670: INFO: validating pod update-demo-nautilus-l8vw5 Aug 10 23:40:10.685: INFO: got data: { "image": "nautilus.jpg" } Aug 10 23:40:10.685: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 10 23:40:10.685: INFO: update-demo-nautilus-l8vw5 is verified up and running STEP: using delete to clean up resources Aug 10 23:40:10.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2311' Aug 10 23:40:10.827: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 10 23:40:10.827: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 10 23:40:10.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2311' Aug 10 23:40:10.971: INFO: stderr: "No resources found in kubectl-2311 namespace.\n" Aug 10 23:40:10.972: INFO: stdout: "" Aug 10 23:40:10.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2311 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 10 23:40:11.080: INFO: stderr: "" Aug 10 23:40:11.080: INFO: stdout: "update-demo-nautilus-jq4kr\nupdate-demo-nautilus-l8vw5\n" Aug 10 23:40:11.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2311' Aug 10 23:40:11.864: INFO: stderr: "No resources found in kubectl-2311 namespace.\n" Aug 10 23:40:11.864: INFO: stdout: "" Aug 10 23:40:11.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2311 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 10 23:40:11.982: INFO: stderr: "" Aug 10 23:40:11.982: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:40:11.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2311" for this suite. 
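The Update Demo flow above — create a replication controller, poll its pods with go-templates, then force-delete — can be replayed directly. A sketch with a hypothetical RC name (the image is the one the test uses):

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: update-demo-sketch
  spec:
    replicas: 2
    selector:
      name: update-demo-sketch
    template:
      metadata:
        labels:
          name: update-demo-sketch
      spec:
        containers:
        - name: update-demo
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
          ports:
          - containerPort: 80
  EOF

  # The same go-template polling the test uses to list pod names.
  kubectl get pods -l name=update-demo-sketch \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

  # Force deletion, as in the cleanup step (does not wait for termination,
  # hence the warning captured in the log above).
  kubectl delete rc update-demo-sketch --grace-period=0 --force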
• [SLOW TEST:7.963 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":54,"skipped":872,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:40:12.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 10 23:40:12.780: INFO: Waiting up to 5m0s for pod "pod-adfd6abe-0001-4eb3-8e53-a96356db07ae" in namespace "emptydir-4435" to be "Succeeded or Failed" Aug 10 23:40:12.824: INFO: Pod "pod-adfd6abe-0001-4eb3-8e53-a96356db07ae": Phase="Pending", Reason="", readiness=false. Elapsed: 43.911241ms Aug 10 23:40:14.828: INFO: Pod "pod-adfd6abe-0001-4eb3-8e53-a96356db07ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048162207s Aug 10 23:40:16.832: INFO: Pod "pod-adfd6abe-0001-4eb3-8e53-a96356db07ae": Phase="Running", Reason="", readiness=true. Elapsed: 4.052263021s Aug 10 23:40:18.836: INFO: Pod "pod-adfd6abe-0001-4eb3-8e53-a96356db07ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056621894s STEP: Saw pod success Aug 10 23:40:18.837: INFO: Pod "pod-adfd6abe-0001-4eb3-8e53-a96356db07ae" satisfied condition "Succeeded or Failed" Aug 10 23:40:18.840: INFO: Trying to get logs from node latest-worker2 pod pod-adfd6abe-0001-4eb3-8e53-a96356db07ae container test-container: STEP: delete the pod Aug 10 23:40:18.884: INFO: Waiting for pod pod-adfd6abe-0001-4eb3-8e53-a96356db07ae to disappear Aug 10 23:40:18.913: INFO: Pod pod-adfd6abe-0001-4eb3-8e53-a96356db07ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:40:18.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4435" for this suite. 
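The (non-root,0644,default) case boils down to: run as a non-root UID, write a file with mode 0644 into an emptyDir on the default (node-disk) medium, and check ownership and permissions. A minimal sketch with hypothetical names, relying on the kubelet creating emptyDir directories world-writable so the non-root user can write:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-sketch
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001          # non-root
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -ln /mnt/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir: {}             # default medium = node disk
  EOF

  kubectl logs emptydir-mode-sketch   # expect -rw-r--r-- owned by uid 1001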
• [SLOW TEST:6.726 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":887,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:40:18.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7710 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 10 23:40:19.000: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 10 23:40:19.095: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 23:40:21.099: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 23:40:23.099: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:40:25.099: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:40:27.099: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:40:29.098: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:40:31.104: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:40:33.099: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:40:35.098: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 10 23:40:35.103: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 10 23:40:39.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.158:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7710 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 23:40:39.231: INFO: >>> kubeConfig: /root/.kube/config I0810 23:40:39.261059 7 log.go:181] (0xc0001246e0) (0xc0071477c0) Create stream I0810 23:40:39.261096 7 log.go:181] (0xc0001246e0) (0xc0071477c0) Stream added, broadcasting: 1 I0810 23:40:39.265028 7 log.go:181] (0xc0001246e0) Reply frame received for 1 I0810 23:40:39.265091 7 log.go:181] (0xc0001246e0) (0xc007147860) Create stream I0810 23:40:39.265120 7 log.go:181] (0xc0001246e0) (0xc007147860) Stream added, broadcasting: 3 I0810 23:40:39.266714 7 log.go:181] (0xc0001246e0) Reply frame received for 3 I0810 23:40:39.266745 7 log.go:181] (0xc0001246e0) (0xc007147900) Create stream I0810 23:40:39.266763 
7 log.go:181] (0xc0001246e0) (0xc007147900) Stream added, broadcasting: 5 I0810 23:40:39.267463 7 log.go:181] (0xc0001246e0) Reply frame received for 5 I0810 23:40:39.335160 7 log.go:181] (0xc0001246e0) Data frame received for 3 I0810 23:40:39.335210 7 log.go:181] (0xc007147860) (3) Data frame handling I0810 23:40:39.335250 7 log.go:181] (0xc007147860) (3) Data frame sent I0810 23:40:39.335274 7 log.go:181] (0xc0001246e0) Data frame received for 3 I0810 23:40:39.335298 7 log.go:181] (0xc0001246e0) Data frame received for 5 I0810 23:40:39.335326 7 log.go:181] (0xc007147900) (5) Data frame handling I0810 23:40:39.335357 7 log.go:181] (0xc007147860) (3) Data frame handling I0810 23:40:39.344444 7 log.go:181] (0xc0001246e0) Data frame received for 1 I0810 23:40:39.344467 7 log.go:181] (0xc0071477c0) (1) Data frame handling I0810 23:40:39.344494 7 log.go:181] (0xc0071477c0) (1) Data frame sent I0810 23:40:39.344510 7 log.go:181] (0xc0001246e0) (0xc0071477c0) Stream removed, broadcasting: 1 I0810 23:40:39.344526 7 log.go:181] (0xc0001246e0) Go away received I0810 23:40:39.344865 7 log.go:181] (0xc0001246e0) (0xc0071477c0) Stream removed, broadcasting: 1 I0810 23:40:39.344884 7 log.go:181] (0xc0001246e0) (0xc007147860) Stream removed, broadcasting: 3 I0810 23:40:39.344897 7 log.go:181] (0xc0001246e0) (0xc007147900) Stream removed, broadcasting: 5 Aug 10 23:40:39.344: INFO: Found all expected endpoints: [netserver-0] Aug 10 23:40:39.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.231:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7710 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 23:40:39.357: INFO: >>> kubeConfig: /root/.kube/config I0810 23:40:39.387313 7 log.go:181] (0xc001742160) (0xc007016aa0) Create stream I0810 23:40:39.387344 7 log.go:181] (0xc001742160) (0xc007016aa0) Stream added, broadcasting: 1 I0810 23:40:39.389010 7 log.go:181] (0xc001742160) Reply frame received for 1 I0810 23:40:39.389037 7 log.go:181] (0xc001742160) (0xc007016b40) Create stream I0810 23:40:39.389042 7 log.go:181] (0xc001742160) (0xc007016b40) Stream added, broadcasting: 3 I0810 23:40:39.390017 7 log.go:181] (0xc001742160) Reply frame received for 3 I0810 23:40:39.390051 7 log.go:181] (0xc001742160) (0xc007147ae0) Create stream I0810 23:40:39.390067 7 log.go:181] (0xc001742160) (0xc007147ae0) Stream added, broadcasting: 5 I0810 23:40:39.390891 7 log.go:181] (0xc001742160) Reply frame received for 5 I0810 23:40:39.469866 7 log.go:181] (0xc001742160) Data frame received for 3 I0810 23:40:39.469898 7 log.go:181] (0xc007016b40) (3) Data frame handling I0810 23:40:39.469908 7 log.go:181] (0xc007016b40) (3) Data frame sent I0810 23:40:39.469916 7 log.go:181] (0xc001742160) Data frame received for 3 I0810 23:40:39.469922 7 log.go:181] (0xc007016b40) (3) Data frame handling I0810 23:40:39.469956 7 log.go:181] (0xc001742160) Data frame received for 5 I0810 23:40:39.469981 7 log.go:181] (0xc007147ae0) (5) Data frame handling I0810 23:40:39.471721 7 log.go:181] (0xc001742160) Data frame received for 1 I0810 23:40:39.471738 7 log.go:181] (0xc007016aa0) (1) Data frame handling I0810 23:40:39.471749 7 log.go:181] (0xc007016aa0) (1) Data frame sent I0810 23:40:39.471769 7 log.go:181] (0xc001742160) (0xc007016aa0) Stream removed, broadcasting: 1 I0810 23:40:39.471794 7 log.go:181] (0xc001742160) Go away received I0810 23:40:39.471861 7 log.go:181] (0xc001742160) 
(0xc007016aa0) Stream removed, broadcasting: 1 I0810 23:40:39.471900 7 log.go:181] (0xc001742160) (0xc007016b40) Stream removed, broadcasting: 3 I0810 23:40:39.471932 7 log.go:181] (0xc001742160) (0xc007147ae0) Stream removed, broadcasting: 5 Aug 10 23:40:39.471: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:40:39.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7710" for this suite. • [SLOW TEST:20.560 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:40:39.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:41:10.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4486" 
for this suite. • [SLOW TEST:31.484 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":57,"skipped":926,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:41:10.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 23:41:11.694: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 23:41:13.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699671, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699671, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699671, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732699671, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 23:41:16.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 10 23:41:16.885: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:41:16.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2071" for this suite. STEP: Destroying namespace "webhook-2071-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.112 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":58,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:41:17.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0810 23:41:29.327980 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 10 23:42:31.359: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 10 23:42:31.359: INFO: Deleting pod "simpletest-rc-to-be-deleted-5nm8c" in namespace "gc-6082" Aug 10 23:42:31.410: INFO: Deleting pod "simpletest-rc-to-be-deleted-94ndh" in namespace "gc-6082" Aug 10 23:42:31.559: INFO: Deleting pod "simpletest-rc-to-be-deleted-dh4gn" in namespace "gc-6082" Aug 10 23:42:32.069: INFO: Deleting pod "simpletest-rc-to-be-deleted-fjpdm" in namespace "gc-6082" Aug 10 23:42:32.390: INFO: Deleting pod "simpletest-rc-to-be-deleted-fjszs" in namespace "gc-6082" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:42:32.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6082" for this suite. 
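The garbage-collector case above gives half the pods two owners: the RC being deleted in foreground mode (which waits for its dependents) and one that stays. Dependents with another live owner must survive the wait. A sketch of the mechanics with a hypothetical RC name my-rc; kubectl of this era expressed cascading as --cascade=true/false, so the foreground policy is shown as raw deleteOptions through kubectl proxy:

  # Which owners does each pod have?
  kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'

  # Foreground deletion: the owner gets the foregroundDeletion finalizer and
  # is only removed once dependents owned solely by it are gone.
  kubectl proxy --port=8001 &
  curl -s -X DELETE \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc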
• [SLOW TEST:75.985 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":59,"skipped":985,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:42:33.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e3175bd7-0d19-4169-95db-eaaf9d4648f5 STEP: Creating a pod to test consume configMaps Aug 10 23:42:33.868: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c" in namespace "projected-6518" to be "Succeeded or Failed" Aug 10 23:42:33.884: INFO: Pod "pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.399276ms Aug 10 23:42:35.889: INFO: Pod "pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021215565s Aug 10 23:42:37.894: INFO: Pod "pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c": Phase="Running", Reason="", readiness=true. Elapsed: 4.02588085s Aug 10 23:42:39.898: INFO: Pod "pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030514681s STEP: Saw pod success Aug 10 23:42:39.899: INFO: Pod "pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c" satisfied condition "Succeeded or Failed" Aug 10 23:42:39.902: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c container projected-configmap-volume-test: STEP: delete the pod Aug 10 23:42:39.935: INFO: Waiting for pod pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c to disappear Aug 10 23:42:39.955: INFO: Pod pod-projected-configmaps-dc799154-7c44-45d3-ac26-831d7a3a912c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:42:39.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6518" for this suite. 
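defaultMode on a projected (or plain configMap) volume sets the file mode of every projected key, which is what the defaultMode check above verifies. A minimal sketch with hypothetical names; the mounted files are symlinks into a ..data directory, so ls needs -L to show the real mode:

  kubectl create configmap cm-sketch --from-literal=data-1=value-1

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-sketch
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -lL /etc/cfg && cat /etc/cfg/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      projected:
        defaultMode: 0400    # owner read-only, analogous to the test's mode check
        sources:
        - configMap:
            name: cm-sketch
  EOF

  kubectl logs projected-cm-sketch    # expect -r-------- on data-1, then value-1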
• [SLOW TEST:6.898 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":994,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:42:39.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 10 23:42:44.075: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:42:44.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2534" for this suite. 
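The termination-message case above is: a non-root container writes to a custom terminationMessagePath, and the kubelet copies that file into the container's terminated state, where the test compares it against "DONE". A minimal sketch with hypothetical names:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: termmsg-sketch
  spec:
    restartPolicy: Never
    containers:
    - name: msg
      image: busybox
      securityContext:
        runAsUser: 1001                            # non-root, as in the test
      terminationMessagePath: /tmp/termination-log # non-default path
      command: ["sh", "-c", "printf DONE > /tmp/termination-log"]
  EOF

  # Once the container terminates, the message surfaces in pod status.
  kubectl get pod termmsg-sketch \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'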
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":999,"failed":0} SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:42:44.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Aug 10 23:42:48.247: INFO: Pod pod-hostip-f972c1cc-1b1a-47d7-be5c-67e6ba53d100 has hostIP: 172.18.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:42:48.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2307" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":1001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:42:48.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4953 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4953;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4953 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4953;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4953.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4953.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4953.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4953.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4953.svc SRV)" && test 
-n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4953.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4953.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4953.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 200.106.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.106.200_udp@PTR;check="$$(dig +tcp +noall +answer +search 200.106.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.106.200_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4953 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4953;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4953 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4953;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4953.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4953.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4953.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4953.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4953.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4953.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4953.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4953.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4953.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 200.106.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.106.200_udp@PTR;check="$$(dig +tcp +noall +answer +search 200.106.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.106.200_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 23:42:54.538: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.541: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.544: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.578: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.582: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.585: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.606: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.609: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.612: INFO: Unable to read jessie_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.615: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.618: INFO: Unable to read jessie_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.621: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.624: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.626: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:54.645: INFO: Lookups using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4953 wheezy_tcp@dns-test-service.dns-4953 wheezy_udp@dns-test-service.dns-4953.svc wheezy_tcp@dns-test-service.dns-4953.svc wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4953 jessie_tcp@dns-test-service.dns-4953 jessie_udp@dns-test-service.dns-4953.svc jessie_tcp@dns-test-service.dns-4953.svc jessie_udp@_http._tcp.dns-test-service.dns-4953.svc jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc] Aug 10 23:42:59.651: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.655: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.667: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.670: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.673: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.696: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.698: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.700: INFO: Unable to read jessie_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.705: INFO: Unable to read jessie_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.710: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.713: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:42:59.733: INFO: Lookups using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4953 wheezy_tcp@dns-test-service.dns-4953 wheezy_udp@dns-test-service.dns-4953.svc wheezy_tcp@dns-test-service.dns-4953.svc wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4953 jessie_tcp@dns-test-service.dns-4953 jessie_udp@dns-test-service.dns-4953.svc jessie_tcp@dns-test-service.dns-4953.svc jessie_udp@_http._tcp.dns-test-service.dns-4953.svc jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc] Aug 10 23:43:04.650: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.653: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.655: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.659: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953 from pod 
dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.690: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.693: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.715: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.719: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.723: INFO: Unable to read jessie_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.729: INFO: Unable to read jessie_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.731: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.734: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.736: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:04.754: INFO: Lookups using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4953 wheezy_tcp@dns-test-service.dns-4953 wheezy_udp@dns-test-service.dns-4953.svc wheezy_tcp@dns-test-service.dns-4953.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4953 jessie_tcp@dns-test-service.dns-4953 jessie_udp@dns-test-service.dns-4953.svc jessie_tcp@dns-test-service.dns-4953.svc jessie_udp@_http._tcp.dns-test-service.dns-4953.svc jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc] Aug 10 23:43:09.650: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.654: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.666: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.669: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.672: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.690: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.693: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.695: INFO: Unable to read jessie_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.700: INFO: Unable to read jessie_udp@dns-test-service.dns-4953.svc from pod 
dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.705: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.708: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:09.724: INFO: Lookups using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4953 wheezy_tcp@dns-test-service.dns-4953 wheezy_udp@dns-test-service.dns-4953.svc wheezy_tcp@dns-test-service.dns-4953.svc wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4953 jessie_tcp@dns-test-service.dns-4953 jessie_udp@dns-test-service.dns-4953.svc jessie_tcp@dns-test-service.dns-4953.svc jessie_udp@_http._tcp.dns-test-service.dns-4953.svc jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc] Aug 10 23:43:14.651: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.654: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.667: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.670: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.673: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod 
dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.694: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.697: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.700: INFO: Unable to read jessie_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.706: INFO: Unable to read jessie_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:14.738: INFO: Lookups using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4953 wheezy_tcp@dns-test-service.dns-4953 wheezy_udp@dns-test-service.dns-4953.svc wheezy_tcp@dns-test-service.dns-4953.svc wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4953 jessie_tcp@dns-test-service.dns-4953 jessie_udp@dns-test-service.dns-4953.svc jessie_tcp@dns-test-service.dns-4953.svc jessie_udp@_http._tcp.dns-test-service.dns-4953.svc jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc] Aug 10 23:43:19.650: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.654: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the 
server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.660: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.663: INFO: Unable to read wheezy_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.666: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.668: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.670: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.690: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.694: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.696: INFO: Unable to read jessie_udp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953 from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.702: INFO: Unable to read jessie_udp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.705: INFO: Unable to read jessie_tcp@dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.708: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.710: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc from pod dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d: the server could not find the requested resource (get pods dns-test-682b58e6-9707-46c4-bfba-409e3210973d) Aug 10 23:43:19.728: INFO: Lookups using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4953 wheezy_tcp@dns-test-service.dns-4953 wheezy_udp@dns-test-service.dns-4953.svc wheezy_tcp@dns-test-service.dns-4953.svc wheezy_udp@_http._tcp.dns-test-service.dns-4953.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4953.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4953 jessie_tcp@dns-test-service.dns-4953 jessie_udp@dns-test-service.dns-4953.svc jessie_tcp@dns-test-service.dns-4953.svc jessie_udp@_http._tcp.dns-test-service.dns-4953.svc jessie_tcp@_http._tcp.dns-test-service.dns-4953.svc] Aug 10 23:43:24.728: INFO: DNS probes using dns-4953/dns-test-682b58e6-9707-46c4-bfba-409e3210973d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:43:25.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4953" for this suite. • [SLOW TEST:37.477 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":63,"skipped":1051,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:43:25.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:43:25.980: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 10 23:43:25.997: INFO: Number of nodes with available pods: 0 Aug 10 23:43:25.997: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
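The node-label dance this test automates can be replayed by hand against the same cluster. A rough sketch using the node and namespace names from this run (the label key "color" is an assumption; the suite generates its own selector key internally):

kubectl label nodes latest-worker color=blue --overwrite
kubectl -n daemonsets-5494 get pods -o wide                 # daemon pod appears on the labeled node
kubectl label nodes latest-worker color=green --overwrite   # selector no longer matches; the pod drains

The log entries that follow show the framework polling for exactly these two transitions.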
Aug 10 23:43:26.074: INFO: Number of nodes with available pods: 0 Aug 10 23:43:26.074: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:27.078: INFO: Number of nodes with available pods: 0 Aug 10 23:43:27.078: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:28.156: INFO: Number of nodes with available pods: 0 Aug 10 23:43:28.156: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:29.079: INFO: Number of nodes with available pods: 0 Aug 10 23:43:29.079: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:30.088: INFO: Number of nodes with available pods: 1 Aug 10 23:43:30.088: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 10 23:43:30.119: INFO: Number of nodes with available pods: 1 Aug 10 23:43:30.119: INFO: Number of running nodes: 0, number of available pods: 1 Aug 10 23:43:31.121: INFO: Number of nodes with available pods: 0 Aug 10 23:43:31.121: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 10 23:43:31.162: INFO: Number of nodes with available pods: 0 Aug 10 23:43:31.162: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:32.192: INFO: Number of nodes with available pods: 0 Aug 10 23:43:32.192: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:33.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:33.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:34.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:34.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:35.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:35.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:36.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:36.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:37.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:37.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:38.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:38.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:39.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:39.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:40.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:40.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:41.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:41.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:42.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:42.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:43.166: INFO: Number of nodes with available pods: 0 Aug 10 23:43:43.166: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:44.167: INFO: Number of nodes with available pods: 0 Aug 10 23:43:44.167: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:45.165: INFO: Number of nodes with available pods: 0 Aug 10 23:43:45.165: INFO: Node latest-worker is running more than one daemon pod Aug 10 23:43:46.181: INFO: Number of nodes with available pods: 1 Aug 10 23:43:46.181: INFO: Number of running nodes: 1, 
number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5494, will wait for the garbage collector to delete the pods Aug 10 23:43:46.248: INFO: Deleting DaemonSet.extensions daemon-set took: 6.234227ms Aug 10 23:43:46.648: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.22974ms Aug 10 23:43:53.256: INFO: Number of nodes with available pods: 0 Aug 10 23:43:53.256: INFO: Number of running nodes: 0, number of available pods: 0 Aug 10 23:43:53.262: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5494/daemonsets","resourceVersion":"6040646"},"items":null} Aug 10 23:43:53.264: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5494/pods","resourceVersion":"6040646"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:43:53.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5494" for this suite. • [SLOW TEST:27.566 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":64,"skipped":1054,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:43:53.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
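The pod created in the next step carries a preStop httpGet hook that fires against the handler container above when the pod is deleted. A minimal sketch of that shape (the image, path, and port are assumptions standing in for the suite's internal test image and handler address):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx        # assumption: any long-running container works
    lifecycle:
      preStop:
        httpGet:
          path: /echo   # assumption: whatever path the handler serves
          port: 8080    # assumption
EOF

Deleting the pod then triggers the hook before the container is stopped, which is what the "check prestop hook" step below verifies.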
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 10 23:44:01.456: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:01.476: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:03.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:03.479: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:05.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:05.480: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:07.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:07.516: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:09.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:09.479: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:11.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:11.490: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:13.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:13.480: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 23:44:15.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 23:44:15.480: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:44:15.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9757" for this suite. 
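The disappearance polling above is the long way of writing a deletion wait; with a current kubectl the same check is one command (namespace taken from this run):

kubectl -n container-lifecycle-hook-9757 wait --for=delete pod/pod-with-prestop-http-hook --timeout=60s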
• [SLOW TEST:22.207 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":65,"skipped":1064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:44:15.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 10 23:44:15.648: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Aug 10 23:44:15.653: INFO: starting watch STEP: patching STEP: updating Aug 10 23:44:15.676: INFO: waiting for watch events with expected annotations Aug 10 23:44:15.676: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:44:15.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-7428" for this suite.
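Every step in this test is a plain API read or write that kubectl can issue directly; the discovery and cluster-wide list/watch calls, for instance, are safe to replay against any cluster:

kubectl get --raw /apis
kubectl get --raw /apis/networking.k8s.io
kubectl get --raw /apis/networking.k8s.io/v1
kubectl get ingress --all-namespaces          # cluster-wide listing
kubectl get ingress --all-namespaces --watch  # cluster-wide watching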
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":66,"skipped":1164,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:44:15.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1993.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1993.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1993.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1993.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1993.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1993.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 23:44:21.969: INFO: DNS probes using dns-1993/dns-test-73752cc5-06fb-4448-91de-444fbd402107 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:44:22.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1993" for this suite. 
• [SLOW TEST:6.317 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":67,"skipped":1177,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:44:22.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 10 23:44:22.501: INFO: Waiting up to 1m0s for all nodes to be ready Aug 10 23:45:22.521: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:45:22.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Aug 10 23:45:26.618: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:45:40.783: INFO: pods created so far: [1 1 1] Aug 10 23:45:40.783: INFO: length of pods created so far: 3 Aug 10 23:45:58.794: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:05.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8737" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:05.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5641" for this suite. 
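The preemption path exercised here hinges on PriorityClass objects and pods that reference them. A minimal sketch of the mechanism (the class name and value are illustrative, not the suite's own):

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-high
value: 1000
EOF

A pod opts in via spec.priorityClassName: example-high; when a node is full, the scheduler may evict lower-priority pods to place it, which is what the "[2 2 1]" ReplicaSet pod-count pattern logged above reflects.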
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:103.843 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":68,"skipped":1193,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:46:05.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9615 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9615 I0810 23:46:06.241247 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9615, replica count: 2 I0810 23:46:09.291714 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:46:12.291970 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 23:46:12.292: INFO: Creating new exec pod Aug 10 23:46:17.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9615 execpod29hpv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 10 23:46:17.532: INFO: stderr: "I0810 23:46:17.461407 1364 log.go:181] (0xc000c113f0) (0xc0006b8c80) Create stream\nI0810 23:46:17.461481 1364 log.go:181] (0xc000c113f0) (0xc0006b8c80) Stream added, broadcasting: 1\nI0810 23:46:17.464304 1364 log.go:181] (0xc000c113f0) Reply frame received for 1\nI0810 23:46:17.464361 1364 log.go:181] (0xc000c113f0) (0xc0008580a0) Create stream\nI0810 23:46:17.464385 1364 log.go:181] (0xc000c113f0) (0xc0008580a0) Stream added, broadcasting: 3\nI0810 23:46:17.465717 1364 log.go:181] (0xc000c113f0) Reply frame received for 3\nI0810 23:46:17.465835 1364 log.go:181] (0xc000c113f0) (0xc000b2b5e0) Create stream\nI0810 23:46:17.465857 1364 log.go:181] (0xc000c113f0) (0xc000b2b5e0) Stream added, broadcasting: 5\nI0810 
23:46:17.466982 1364 log.go:181] (0xc000c113f0) Reply frame received for 5\nI0810 23:46:17.523714 1364 log.go:181] (0xc000c113f0) Data frame received for 5\nI0810 23:46:17.523762 1364 log.go:181] (0xc000b2b5e0) (5) Data frame handling\nI0810 23:46:17.523797 1364 log.go:181] (0xc000b2b5e0) (5) Data frame sent\nI0810 23:46:17.523823 1364 log.go:181] (0xc000c113f0) Data frame received for 5\nI0810 23:46:17.523837 1364 log.go:181] (0xc000b2b5e0) (5) Data frame handling\nI0810 23:46:17.523856 1364 log.go:181] (0xc000c113f0) Data frame received for 3\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0810 23:46:17.523871 1364 log.go:181] (0xc0008580a0) (3) Data frame handling\nI0810 23:46:17.523969 1364 log.go:181] (0xc000b2b5e0) (5) Data frame sent\nI0810 23:46:17.524672 1364 log.go:181] (0xc000c113f0) Data frame received for 5\nI0810 23:46:17.524703 1364 log.go:181] (0xc000b2b5e0) (5) Data frame handling\nI0810 23:46:17.526546 1364 log.go:181] (0xc000c113f0) Data frame received for 1\nI0810 23:46:17.526585 1364 log.go:181] (0xc0006b8c80) (1) Data frame handling\nI0810 23:46:17.526620 1364 log.go:181] (0xc0006b8c80) (1) Data frame sent\nI0810 23:46:17.526650 1364 log.go:181] (0xc000c113f0) (0xc0006b8c80) Stream removed, broadcasting: 1\nI0810 23:46:17.526817 1364 log.go:181] (0xc000c113f0) Go away received\nI0810 23:46:17.527165 1364 log.go:181] (0xc000c113f0) (0xc0006b8c80) Stream removed, broadcasting: 1\nI0810 23:46:17.527195 1364 log.go:181] (0xc000c113f0) (0xc0008580a0) Stream removed, broadcasting: 3\nI0810 23:46:17.527224 1364 log.go:181] (0xc000c113f0) (0xc000b2b5e0) Stream removed, broadcasting: 5\n" Aug 10 23:46:17.532: INFO: stdout: "" Aug 10 23:46:17.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9615 execpod29hpv -- /bin/sh -x -c nc -zv -t -w 2 10.99.99.238 80' Aug 10 23:46:17.753: INFO: stderr: "I0810 23:46:17.672316 1382 log.go:181] (0xc0006c60b0) (0xc000b0fa40) Create stream\nI0810 23:46:17.672370 1382 log.go:181] (0xc0006c60b0) (0xc000b0fa40) Stream added, broadcasting: 1\nI0810 23:46:17.674477 1382 log.go:181] (0xc0006c60b0) Reply frame received for 1\nI0810 23:46:17.674543 1382 log.go:181] (0xc0006c60b0) (0xc000b0fcc0) Create stream\nI0810 23:46:17.674572 1382 log.go:181] (0xc0006c60b0) (0xc000b0fcc0) Stream added, broadcasting: 3\nI0810 23:46:17.675463 1382 log.go:181] (0xc0006c60b0) Reply frame received for 3\nI0810 23:46:17.675529 1382 log.go:181] (0xc0006c60b0) (0xc0008826e0) Create stream\nI0810 23:46:17.675562 1382 log.go:181] (0xc0006c60b0) (0xc0008826e0) Stream added, broadcasting: 5\nI0810 23:46:17.676440 1382 log.go:181] (0xc0006c60b0) Reply frame received for 5\nI0810 23:46:17.745213 1382 log.go:181] (0xc0006c60b0) Data frame received for 3\nI0810 23:46:17.745238 1382 log.go:181] (0xc000b0fcc0) (3) Data frame handling\nI0810 23:46:17.745254 1382 log.go:181] (0xc0006c60b0) Data frame received for 5\nI0810 23:46:17.745263 1382 log.go:181] (0xc0008826e0) (5) Data frame handling\nI0810 23:46:17.745273 1382 log.go:181] (0xc0008826e0) (5) Data frame sent\nI0810 23:46:17.745280 1382 log.go:181] (0xc0006c60b0) Data frame received for 5\nI0810 23:46:17.745285 1382 log.go:181] (0xc0008826e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.99.238 80\nConnection to 10.99.99.238 80 port [tcp/http] succeeded!\nI0810 23:46:17.746600 1382 log.go:181] (0xc0006c60b0) Data frame received for 1\nI0810 23:46:17.746632 1382 log.go:181] 
(0xc000b0fa40) (1) Data frame handling\nI0810 23:46:17.746653 1382 log.go:181] (0xc000b0fa40) (1) Data frame sent\nI0810 23:46:17.746676 1382 log.go:181] (0xc0006c60b0) (0xc000b0fa40) Stream removed, broadcasting: 1\nI0810 23:46:17.746713 1382 log.go:181] (0xc0006c60b0) Go away received\nI0810 23:46:17.747194 1382 log.go:181] (0xc0006c60b0) (0xc000b0fa40) Stream removed, broadcasting: 1\nI0810 23:46:17.747215 1382 log.go:181] (0xc0006c60b0) (0xc000b0fcc0) Stream removed, broadcasting: 3\nI0810 23:46:17.747231 1382 log.go:181] (0xc0006c60b0) (0xc0008826e0) Stream removed, broadcasting: 5\n" Aug 10 23:46:17.753: INFO: stdout: "" Aug 10 23:46:17.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9615 execpod29hpv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30867' Aug 10 23:46:17.989: INFO: stderr: "I0810 23:46:17.900587 1400 log.go:181] (0xc000654fd0) (0xc0009797c0) Create stream\nI0810 23:46:17.900647 1400 log.go:181] (0xc000654fd0) (0xc0009797c0) Stream added, broadcasting: 1\nI0810 23:46:17.907148 1400 log.go:181] (0xc000654fd0) Reply frame received for 1\nI0810 23:46:17.907190 1400 log.go:181] (0xc000654fd0) (0xc0008fd0e0) Create stream\nI0810 23:46:17.907201 1400 log.go:181] (0xc000654fd0) (0xc0008fd0e0) Stream added, broadcasting: 3\nI0810 23:46:17.908135 1400 log.go:181] (0xc000654fd0) Reply frame received for 3\nI0810 23:46:17.908170 1400 log.go:181] (0xc000654fd0) (0xc0008b0640) Create stream\nI0810 23:46:17.908184 1400 log.go:181] (0xc000654fd0) (0xc0008b0640) Stream added, broadcasting: 5\nI0810 23:46:17.909070 1400 log.go:181] (0xc000654fd0) Reply frame received for 5\nI0810 23:46:17.980073 1400 log.go:181] (0xc000654fd0) Data frame received for 5\nI0810 23:46:17.980093 1400 log.go:181] (0xc0008b0640) (5) Data frame handling\nI0810 23:46:17.980111 1400 log.go:181] (0xc0008b0640) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30867\nConnection to 172.18.0.14 30867 port [tcp/30867] succeeded!\nI0810 23:46:17.980353 1400 log.go:181] (0xc000654fd0) Data frame received for 3\nI0810 23:46:17.980369 1400 log.go:181] (0xc0008fd0e0) (3) Data frame handling\nI0810 23:46:17.980539 1400 log.go:181] (0xc000654fd0) Data frame received for 5\nI0810 23:46:17.980559 1400 log.go:181] (0xc0008b0640) (5) Data frame handling\nI0810 23:46:17.982629 1400 log.go:181] (0xc000654fd0) Data frame received for 1\nI0810 23:46:17.982665 1400 log.go:181] (0xc0009797c0) (1) Data frame handling\nI0810 23:46:17.982687 1400 log.go:181] (0xc0009797c0) (1) Data frame sent\nI0810 23:46:17.982711 1400 log.go:181] (0xc000654fd0) (0xc0009797c0) Stream removed, broadcasting: 1\nI0810 23:46:17.982799 1400 log.go:181] (0xc000654fd0) Go away received\nI0810 23:46:17.983057 1400 log.go:181] (0xc000654fd0) (0xc0009797c0) Stream removed, broadcasting: 1\nI0810 23:46:17.983075 1400 log.go:181] (0xc000654fd0) (0xc0008fd0e0) Stream removed, broadcasting: 3\nI0810 23:46:17.983090 1400 log.go:181] (0xc000654fd0) (0xc0008b0640) Stream removed, broadcasting: 5\n" Aug 10 23:46:17.990: INFO: stdout: "" Aug 10 23:46:17.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9615 execpod29hpv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30867' Aug 10 23:46:18.211: INFO: stderr: "I0810 23:46:18.136885 1418 log.go:181] (0xc00003b340) (0xc0009403c0) Create stream\nI0810 23:46:18.136962 1418 log.go:181] (0xc00003b340) (0xc0009403c0) Stream added, broadcasting: 1\nI0810 
23:46:18.146233 1418 log.go:181] (0xc00003b340) Reply frame received for 1\nI0810 23:46:18.146286 1418 log.go:181] (0xc00003b340) (0xc0007e4500) Create stream\nI0810 23:46:18.146297 1418 log.go:181] (0xc00003b340) (0xc0007e4500) Stream added, broadcasting: 3\nI0810 23:46:18.147069 1418 log.go:181] (0xc00003b340) Reply frame received for 3\nI0810 23:46:18.147094 1418 log.go:181] (0xc00003b340) (0xc00061d360) Create stream\nI0810 23:46:18.147106 1418 log.go:181] (0xc00003b340) (0xc00061d360) Stream added, broadcasting: 5\nI0810 23:46:18.147726 1418 log.go:181] (0xc00003b340) Reply frame received for 5\nI0810 23:46:18.204132 1418 log.go:181] (0xc00003b340) Data frame received for 5\nI0810 23:46:18.204180 1418 log.go:181] (0xc00061d360) (5) Data frame handling\nI0810 23:46:18.204196 1418 log.go:181] (0xc00061d360) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30867\nConnection to 172.18.0.12 30867 port [tcp/30867] succeeded!\nI0810 23:46:18.204235 1418 log.go:181] (0xc00003b340) Data frame received for 3\nI0810 23:46:18.204294 1418 log.go:181] (0xc0007e4500) (3) Data frame handling\nI0810 23:46:18.204347 1418 log.go:181] (0xc00003b340) Data frame received for 5\nI0810 23:46:18.204361 1418 log.go:181] (0xc00061d360) (5) Data frame handling\nI0810 23:46:18.205524 1418 log.go:181] (0xc00003b340) Data frame received for 1\nI0810 23:46:18.205590 1418 log.go:181] (0xc0009403c0) (1) Data frame handling\nI0810 23:46:18.205637 1418 log.go:181] (0xc0009403c0) (1) Data frame sent\nI0810 23:46:18.205688 1418 log.go:181] (0xc00003b340) (0xc0009403c0) Stream removed, broadcasting: 1\nI0810 23:46:18.205714 1418 log.go:181] (0xc00003b340) Go away received\nI0810 23:46:18.206119 1418 log.go:181] (0xc00003b340) (0xc0009403c0) Stream removed, broadcasting: 1\nI0810 23:46:18.206144 1418 log.go:181] (0xc00003b340) (0xc0007e4500) Stream removed, broadcasting: 3\nI0810 23:46:18.206156 1418 log.go:181] (0xc00003b340) (0xc00061d360) Stream removed, broadcasting: 5\n" Aug 10 23:46:18.211: INFO: stdout: "" Aug 10 23:46:18.211: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:18.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9615" for this suite. 
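The connectivity checks in this test are nc probes run through kubectl exec, and while the service existed they could be replayed verbatim (names and addresses below are taken from this run):

kubectl -n services-9615 exec execpod29hpv -- nc -zv -t -w 2 externalname-service 80
kubectl -n services-9615 exec execpod29hpv -- nc -zv -t -w 2 10.99.99.238 80
kubectl -n services-9615 exec execpod29hpv -- nc -zv -t -w 2 172.18.0.14 30867

The type flip itself is roughly kubectl -n services-9615 patch svc externalname-service -p '{"spec":{"type":"NodePort"}}', with the caveat that a real conversion also needs ports and a selector, which the test supplies through its replication controller.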
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.368 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":69,"skipped":1202,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:46:18.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Aug 10 23:46:18.369: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:18.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-831" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":70,"skipped":1203,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:46:18.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-8f7f5fe9-ed3f-4c24-a512-0cda67c093c3 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:18.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1219" for this suite. 
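
The empty-key ConfigMap test above needs nothing more than API-server validation: the create call must be rejected, so there is nothing to clean up beyond the namespace. A hypothetical CLI reproduction (the metadata name is arbitrary); kubectl should fail with a "not a valid config key" validation error and persist nothing:

# A ConfigMap whose data map uses the empty string as a key; the API
# server rejects this at validation time, so the object is never stored.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"
EOF
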
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":71,"skipped":1207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:46:18.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:18.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7663" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":72,"skipped":1235,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:46:18.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Aug 10 23:46:18.761: INFO: Waiting up to 5m0s for pod "client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c" in namespace "containers-9006" to be "Succeeded or Failed" Aug 10 23:46:18.774: INFO: Pod "client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.916541ms Aug 10 23:46:20.778: INFO: Pod "client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017677447s Aug 10 23:46:22.782: INFO: Pod "client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021467723s STEP: Saw pod success Aug 10 23:46:22.782: INFO: Pod "client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c" satisfied condition "Succeeded or Failed" Aug 10 23:46:22.784: INFO: Trying to get logs from node latest-worker pod client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c container test-container: STEP: delete the pod Aug 10 23:46:22.815: INFO: Waiting for pod client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c to disappear Aug 10 23:46:22.834: INFO: Pod client-containers-fc61e45d-943f-4a75-b1de-3ad25a86cc6c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:46:22.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9006" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":73,"skipped":1247,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:46:22.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:46:22.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 10 23:46:23.578: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T23:46:23Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T23:46:23Z]] name:name1 resourceVersion:6041490 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59633c5b-7422-4f99-9cfd-5466aba800a1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 10 23:46:33.596: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T23:46:33Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T23:46:33Z]] name:name2 resourceVersion:6041560 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:77c336f1-1403-4e69-a6f5-6611e1b9dcff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 10 23:46:43.603: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T23:46:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 
fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T23:46:43Z]] name:name1 resourceVersion:6041589 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59633c5b-7422-4f99-9cfd-5466aba800a1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 10 23:46:53.610: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T23:46:33Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T23:46:53Z]] name:name2 resourceVersion:6041619 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:77c336f1-1403-4e69-a6f5-6611e1b9dcff] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 10 23:47:03.619: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T23:46:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T23:46:43Z]] name:name1 resourceVersion:6041647 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59633c5b-7422-4f99-9cfd-5466aba800a1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 10 23:47:13.628: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T23:46:33Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T23:46:53Z]] name:name2 resourceVersion:6041677 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:77c336f1-1403-4e69-a6f5-6611e1b9dcff] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:47:24.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4684" for this suite. 
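
Each ADDED/MODIFIED/DELETED entry above is a watch event the test receives while it creates, patches, and deletes the two noxu objects. A rough CLI equivalent, assuming the test's CRD (plural noxus, group mygroup.example.com, as in the selfLinks above) is installed and that kubectl is recent enough to support --output-watch-events:

# Stream watch events for the custom resource; creates, patches, and
# deletes surface as ADDED / MODIFIED / DELETED entries.
kubectl get noxus.mygroup.example.com --watch --output-watch-events
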
• [SLOW TEST:61.307 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":74,"skipped":1248,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:47:24.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7591 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 10 23:47:24.282: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 10 23:47:24.365: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 23:47:26.622: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 23:47:28.369: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 23:47:30.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:32.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:34.371: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:36.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:38.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:40.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:42.370: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:44.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 23:47:46.369: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 10 23:47:46.375: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 10 23:47:50.409: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.4:8080/dial?request=hostname&protocol=http&host=10.244.1.168&port=8080&tries=1'] Namespace:pod-network-test-7591 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 23:47:50.409: INFO: >>> kubeConfig: /root/.kube/config I0810 23:47:50.448040 7 log.go:181] (0xc002ff0420) (0xc0023a2c80) Create stream I0810 23:47:50.448082 7 log.go:181] (0xc002ff0420) 
(0xc0023a2c80) Stream added, broadcasting: 1 I0810 23:47:50.450332 7 log.go:181] (0xc002ff0420) Reply frame received for 1 I0810 23:47:50.450403 7 log.go:181] (0xc002ff0420) (0xc002e30960) Create stream I0810 23:47:50.450423 7 log.go:181] (0xc002ff0420) (0xc002e30960) Stream added, broadcasting: 3 I0810 23:47:50.451394 7 log.go:181] (0xc002ff0420) Reply frame received for 3 I0810 23:47:50.451442 7 log.go:181] (0xc002ff0420) (0xc002e30a00) Create stream I0810 23:47:50.451459 7 log.go:181] (0xc002ff0420) (0xc002e30a00) Stream added, broadcasting: 5 I0810 23:47:50.452465 7 log.go:181] (0xc002ff0420) Reply frame received for 5 I0810 23:47:50.546421 7 log.go:181] (0xc002ff0420) Data frame received for 3 I0810 23:47:50.546445 7 log.go:181] (0xc002e30960) (3) Data frame handling I0810 23:47:50.546466 7 log.go:181] (0xc002e30960) (3) Data frame sent I0810 23:47:50.547257 7 log.go:181] (0xc002ff0420) Data frame received for 3 I0810 23:47:50.547305 7 log.go:181] (0xc002e30960) (3) Data frame handling I0810 23:47:50.547335 7 log.go:181] (0xc002ff0420) Data frame received for 5 I0810 23:47:50.547355 7 log.go:181] (0xc002e30a00) (5) Data frame handling I0810 23:47:50.548875 7 log.go:181] (0xc002ff0420) Data frame received for 1 I0810 23:47:50.548891 7 log.go:181] (0xc0023a2c80) (1) Data frame handling I0810 23:47:50.548904 7 log.go:181] (0xc0023a2c80) (1) Data frame sent I0810 23:47:50.549026 7 log.go:181] (0xc002ff0420) (0xc0023a2c80) Stream removed, broadcasting: 1 I0810 23:47:50.549113 7 log.go:181] (0xc002ff0420) (0xc0023a2c80) Stream removed, broadcasting: 1 I0810 23:47:50.549136 7 log.go:181] (0xc002ff0420) (0xc002e30960) Stream removed, broadcasting: 3 I0810 23:47:50.549145 7 log.go:181] (0xc002ff0420) (0xc002e30a00) Stream removed, broadcasting: 5 Aug 10 23:47:50.549: INFO: Waiting for responses: map[] I0810 23:47:50.549215 7 log.go:181] (0xc002ff0420) Go away received Aug 10 23:47:50.590: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.4:8080/dial?request=hostname&protocol=http&host=10.244.2.3&port=8080&tries=1'] Namespace:pod-network-test-7591 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 23:47:50.590: INFO: >>> kubeConfig: /root/.kube/config I0810 23:47:50.617227 7 log.go:181] (0xc00107c6e0) (0xc001be55e0) Create stream I0810 23:47:50.617264 7 log.go:181] (0xc00107c6e0) (0xc001be55e0) Stream added, broadcasting: 1 I0810 23:47:50.620644 7 log.go:181] (0xc00107c6e0) Reply frame received for 1 I0810 23:47:50.620709 7 log.go:181] (0xc00107c6e0) (0xc0071468c0) Create stream I0810 23:47:50.620818 7 log.go:181] (0xc00107c6e0) (0xc0071468c0) Stream added, broadcasting: 3 I0810 23:47:50.623484 7 log.go:181] (0xc00107c6e0) Reply frame received for 3 I0810 23:47:50.623533 7 log.go:181] (0xc00107c6e0) (0xc0023a2d20) Create stream I0810 23:47:50.623544 7 log.go:181] (0xc00107c6e0) (0xc0023a2d20) Stream added, broadcasting: 5 I0810 23:47:50.624351 7 log.go:181] (0xc00107c6e0) Reply frame received for 5 I0810 23:47:50.697992 7 log.go:181] (0xc00107c6e0) Data frame received for 3 I0810 23:47:50.698019 7 log.go:181] (0xc0071468c0) (3) Data frame handling I0810 23:47:50.698031 7 log.go:181] (0xc0071468c0) (3) Data frame sent I0810 23:47:50.698913 7 log.go:181] (0xc00107c6e0) Data frame received for 5 I0810 23:47:50.698929 7 log.go:181] (0xc0023a2d20) (5) Data frame handling I0810 23:47:50.698952 7 log.go:181] (0xc00107c6e0) Data frame received for 3 I0810 23:47:50.698970 7 log.go:181] (0xc0071468c0) 
(3) Data frame handling I0810 23:47:50.701206 7 log.go:181] (0xc00107c6e0) Data frame received for 1 I0810 23:47:50.701227 7 log.go:181] (0xc001be55e0) (1) Data frame handling I0810 23:47:50.701238 7 log.go:181] (0xc001be55e0) (1) Data frame sent I0810 23:47:50.701337 7 log.go:181] (0xc00107c6e0) (0xc001be55e0) Stream removed, broadcasting: 1 I0810 23:47:50.701414 7 log.go:181] (0xc00107c6e0) (0xc001be55e0) Stream removed, broadcasting: 1 I0810 23:47:50.701440 7 log.go:181] (0xc00107c6e0) (0xc0071468c0) Stream removed, broadcasting: 3 I0810 23:47:50.701565 7 log.go:181] (0xc00107c6e0) (0xc0023a2d20) Stream removed, broadcasting: 5 I0810 23:47:50.701597 7 log.go:181] (0xc00107c6e0) Go away received Aug 10 23:47:50.701: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:47:50.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7591" for this suite. • [SLOW TEST:26.562 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":75,"skipped":1249,"failed":0} [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:47:50.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:47:50.816: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5175 I0810 23:47:50.827206 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5175, replica count: 1 I0810 23:47:51.877648 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:47:52.877851 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:47:53.878077 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:47:54.878354 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 23:47:55.058: INFO: Created: latency-svc-sh6hh Aug 10 23:47:55.068: INFO: Got endpoints: latency-svc-sh6hh [89.73452ms] Aug 10 23:47:55.122: 
INFO: Created: latency-svc-j2c8k Aug 10 23:47:55.138: INFO: Got endpoints: latency-svc-j2c8k [70.316287ms] Aug 10 23:47:55.196: INFO: Created: latency-svc-qk7lr Aug 10 23:47:55.211: INFO: Got endpoints: latency-svc-qk7lr [143.058428ms] Aug 10 23:47:55.256: INFO: Created: latency-svc-9m2n2 Aug 10 23:47:55.264: INFO: Got endpoints: latency-svc-9m2n2 [196.33755ms] Aug 10 23:47:55.283: INFO: Created: latency-svc-zsc8f Aug 10 23:47:55.294: INFO: Got endpoints: latency-svc-zsc8f [226.079709ms] Aug 10 23:47:55.340: INFO: Created: latency-svc-skjrs Aug 10 23:47:55.354: INFO: Got endpoints: latency-svc-skjrs [286.254838ms] Aug 10 23:47:55.392: INFO: Created: latency-svc-nfvf7 Aug 10 23:47:55.490: INFO: Got endpoints: latency-svc-nfvf7 [421.577012ms] Aug 10 23:47:55.494: INFO: Created: latency-svc-c6s8r Aug 10 23:47:55.505: INFO: Got endpoints: latency-svc-c6s8r [436.299205ms] Aug 10 23:47:55.522: INFO: Created: latency-svc-t6jwp Aug 10 23:47:55.535: INFO: Got endpoints: latency-svc-t6jwp [466.937772ms] Aug 10 23:47:55.552: INFO: Created: latency-svc-45rfm Aug 10 23:47:55.566: INFO: Got endpoints: latency-svc-45rfm [497.338062ms] Aug 10 23:47:55.627: INFO: Created: latency-svc-x8q62 Aug 10 23:47:55.630: INFO: Got endpoints: latency-svc-x8q62 [561.849996ms] Aug 10 23:47:55.674: INFO: Created: latency-svc-ffb2l Aug 10 23:47:55.685: INFO: Got endpoints: latency-svc-ffb2l [616.992346ms] Aug 10 23:47:55.709: INFO: Created: latency-svc-hplg9 Aug 10 23:47:55.726: INFO: Got endpoints: latency-svc-hplg9 [657.880991ms] Aug 10 23:47:55.783: INFO: Created: latency-svc-n8b98 Aug 10 23:47:55.830: INFO: Got endpoints: latency-svc-n8b98 [761.163712ms] Aug 10 23:47:55.866: INFO: Created: latency-svc-brp4r Aug 10 23:47:55.880: INFO: Got endpoints: latency-svc-brp4r [811.390174ms] Aug 10 23:47:55.949: INFO: Created: latency-svc-5hlth Aug 10 23:47:55.991: INFO: Got endpoints: latency-svc-5hlth [922.260993ms] Aug 10 23:47:56.060: INFO: Created: latency-svc-vfqqt Aug 10 23:47:56.067: INFO: Got endpoints: latency-svc-vfqqt [928.328958ms] Aug 10 23:47:56.094: INFO: Created: latency-svc-jdwmd Aug 10 23:47:56.102: INFO: Got endpoints: latency-svc-jdwmd [890.770259ms] Aug 10 23:47:56.136: INFO: Created: latency-svc-5zq7k Aug 10 23:47:56.151: INFO: Got endpoints: latency-svc-5zq7k [886.39039ms] Aug 10 23:47:56.202: INFO: Created: latency-svc-fq6j9 Aug 10 23:47:56.206: INFO: Got endpoints: latency-svc-fq6j9 [912.181606ms] Aug 10 23:47:56.231: INFO: Created: latency-svc-xf28p Aug 10 23:47:56.248: INFO: Got endpoints: latency-svc-xf28p [893.225942ms] Aug 10 23:47:56.268: INFO: Created: latency-svc-g4vqh Aug 10 23:47:56.285: INFO: Got endpoints: latency-svc-g4vqh [794.865642ms] Aug 10 23:47:56.352: INFO: Created: latency-svc-2mwh7 Aug 10 23:47:56.369: INFO: Got endpoints: latency-svc-2mwh7 [864.775528ms] Aug 10 23:47:56.398: INFO: Created: latency-svc-bqpj9 Aug 10 23:47:56.422: INFO: Got endpoints: latency-svc-bqpj9 [887.341657ms] Aug 10 23:47:56.694: INFO: Created: latency-svc-6dkf7 Aug 10 23:47:56.790: INFO: Got endpoints: latency-svc-6dkf7 [1.224330549s] Aug 10 23:47:56.790: INFO: Created: latency-svc-jqxjj Aug 10 23:47:56.881: INFO: Got endpoints: latency-svc-jqxjj [1.250367523s] Aug 10 23:47:56.934: INFO: Created: latency-svc-7b6lp Aug 10 23:47:57.190: INFO: Got endpoints: latency-svc-7b6lp [1.504479396s] Aug 10 23:47:57.511: INFO: Created: latency-svc-7zn5k Aug 10 23:47:57.688: INFO: Got endpoints: latency-svc-7zn5k [1.961966193s] Aug 10 23:47:58.026: INFO: Created: latency-svc-8sn4q Aug 10 23:47:58.102: INFO: Got endpoints: 
latency-svc-8sn4q [2.271838943s] Aug 10 23:47:58.256: INFO: Created: latency-svc-sjsln Aug 10 23:47:58.306: INFO: Got endpoints: latency-svc-sjsln [2.426005376s] Aug 10 23:47:58.484: INFO: Created: latency-svc-z2m6l Aug 10 23:47:58.539: INFO: Got endpoints: latency-svc-z2m6l [2.548272495s] Aug 10 23:47:58.693: INFO: Created: latency-svc-s77cc Aug 10 23:47:58.723: INFO: Got endpoints: latency-svc-s77cc [2.655857556s] Aug 10 23:47:58.766: INFO: Created: latency-svc-9qscz Aug 10 23:47:58.779: INFO: Got endpoints: latency-svc-9qscz [2.677163448s] Aug 10 23:47:58.879: INFO: Created: latency-svc-j9wst Aug 10 23:47:58.918: INFO: Got endpoints: latency-svc-j9wst [2.766749376s] Aug 10 23:47:59.023: INFO: Created: latency-svc-lh657 Aug 10 23:47:59.043: INFO: Got endpoints: latency-svc-lh657 [2.836830802s] Aug 10 23:47:59.065: INFO: Created: latency-svc-9qh72 Aug 10 23:47:59.080: INFO: Got endpoints: latency-svc-9qh72 [2.83234739s] Aug 10 23:47:59.101: INFO: Created: latency-svc-2jg9r Aug 10 23:47:59.119: INFO: Got endpoints: latency-svc-2jg9r [2.834026698s] Aug 10 23:47:59.195: INFO: Created: latency-svc-7n895 Aug 10 23:47:59.220: INFO: Created: latency-svc-8ts27 Aug 10 23:47:59.220: INFO: Got endpoints: latency-svc-7n895 [2.850879723s] Aug 10 23:47:59.237: INFO: Got endpoints: latency-svc-8ts27 [2.815088183s] Aug 10 23:47:59.257: INFO: Created: latency-svc-652sb Aug 10 23:47:59.272: INFO: Got endpoints: latency-svc-652sb [2.481840888s] Aug 10 23:47:59.344: INFO: Created: latency-svc-8gdkg Aug 10 23:47:59.356: INFO: Got endpoints: latency-svc-8gdkg [2.475110812s] Aug 10 23:47:59.382: INFO: Created: latency-svc-7rmnl Aug 10 23:47:59.392: INFO: Got endpoints: latency-svc-7rmnl [2.202401585s] Aug 10 23:47:59.418: INFO: Created: latency-svc-8r487 Aug 10 23:47:59.460: INFO: Got endpoints: latency-svc-8r487 [1.77103296s] Aug 10 23:47:59.529: INFO: Created: latency-svc-fcprt Aug 10 23:47:59.558: INFO: Got endpoints: latency-svc-fcprt [1.456036234s] Aug 10 23:47:59.664: INFO: Created: latency-svc-g6npc Aug 10 23:47:59.700: INFO: Got endpoints: latency-svc-g6npc [1.394018506s] Aug 10 23:47:59.814: INFO: Created: latency-svc-m8lql Aug 10 23:47:59.831: INFO: Got endpoints: latency-svc-m8lql [1.292325979s] Aug 10 23:47:59.945: INFO: Created: latency-svc-fdz5h Aug 10 23:47:59.951: INFO: Got endpoints: latency-svc-fdz5h [1.228687394s] Aug 10 23:48:00.007: INFO: Created: latency-svc-hv6m9 Aug 10 23:48:00.024: INFO: Got endpoints: latency-svc-hv6m9 [1.244971522s] Aug 10 23:48:00.043: INFO: Created: latency-svc-dtqsc Aug 10 23:48:00.094: INFO: Got endpoints: latency-svc-dtqsc [1.176397741s] Aug 10 23:48:00.126: INFO: Created: latency-svc-xs47k Aug 10 23:48:00.153: INFO: Got endpoints: latency-svc-xs47k [1.109375284s] Aug 10 23:48:00.239: INFO: Created: latency-svc-7dvx7 Aug 10 23:48:00.253: INFO: Got endpoints: latency-svc-7dvx7 [1.172888465s] Aug 10 23:48:00.271: INFO: Created: latency-svc-shgqk Aug 10 23:48:00.283: INFO: Got endpoints: latency-svc-shgqk [1.164316384s] Aug 10 23:48:00.376: INFO: Created: latency-svc-gqbdt Aug 10 23:48:00.408: INFO: Got endpoints: latency-svc-gqbdt [1.187380114s] Aug 10 23:48:00.439: INFO: Created: latency-svc-shr5w Aug 10 23:48:00.452: INFO: Got endpoints: latency-svc-shr5w [1.214798381s] Aug 10 23:48:00.523: INFO: Created: latency-svc-k6q5t Aug 10 23:48:00.536: INFO: Got endpoints: latency-svc-k6q5t [1.263793328s] Aug 10 23:48:00.582: INFO: Created: latency-svc-hrhgl Aug 10 23:48:00.597: INFO: Got endpoints: latency-svc-hrhgl [1.241187949s] Aug 10 23:48:00.671: INFO: Created: 
latency-svc-sf6t9 Aug 10 23:48:00.687: INFO: Got endpoints: latency-svc-sf6t9 [1.294791796s] Aug 10 23:48:00.739: INFO: Created: latency-svc-jj7n9 Aug 10 23:48:00.747: INFO: Got endpoints: latency-svc-jj7n9 [1.287619876s] Aug 10 23:48:00.807: INFO: Created: latency-svc-bwzvh Aug 10 23:48:00.811: INFO: Got endpoints: latency-svc-bwzvh [1.253647712s] Aug 10 23:48:00.845: INFO: Created: latency-svc-qd9k2 Aug 10 23:48:00.875: INFO: Got endpoints: latency-svc-qd9k2 [1.175311327s] Aug 10 23:48:00.957: INFO: Created: latency-svc-47rxs Aug 10 23:48:01.015: INFO: Got endpoints: latency-svc-47rxs [1.18355271s] Aug 10 23:48:01.101: INFO: Created: latency-svc-4sgxz Aug 10 23:48:01.104: INFO: Got endpoints: latency-svc-4sgxz [1.152461499s] Aug 10 23:48:01.139: INFO: Created: latency-svc-tsww6 Aug 10 23:48:01.157: INFO: Got endpoints: latency-svc-tsww6 [1.132225073s] Aug 10 23:48:01.175: INFO: Created: latency-svc-qk955 Aug 10 23:48:01.187: INFO: Got endpoints: latency-svc-qk955 [1.092432237s] Aug 10 23:48:01.257: INFO: Created: latency-svc-hwnbz Aug 10 23:48:01.261: INFO: Got endpoints: latency-svc-hwnbz [1.108225281s] Aug 10 23:48:01.289: INFO: Created: latency-svc-7s748 Aug 10 23:48:01.313: INFO: Got endpoints: latency-svc-7s748 [1.059711646s] Aug 10 23:48:01.343: INFO: Created: latency-svc-zh4jd Aug 10 23:48:01.400: INFO: Got endpoints: latency-svc-zh4jd [1.116432635s] Aug 10 23:48:01.447: INFO: Created: latency-svc-8whqr Aug 10 23:48:01.464: INFO: Got endpoints: latency-svc-8whqr [1.056425637s] Aug 10 23:48:01.556: INFO: Created: latency-svc-th7dg Aug 10 23:48:01.590: INFO: Got endpoints: latency-svc-th7dg [1.137605313s] Aug 10 23:48:01.590: INFO: Created: latency-svc-hpltw Aug 10 23:48:01.627: INFO: Got endpoints: latency-svc-hpltw [1.09100984s] Aug 10 23:48:01.732: INFO: Created: latency-svc-5sfwq Aug 10 23:48:01.747: INFO: Got endpoints: latency-svc-5sfwq [1.149716309s] Aug 10 23:48:01.787: INFO: Created: latency-svc-d99ld Aug 10 23:48:01.873: INFO: Got endpoints: latency-svc-d99ld [1.185615611s] Aug 10 23:48:01.884: INFO: Created: latency-svc-gvjh6 Aug 10 23:48:01.903: INFO: Got endpoints: latency-svc-gvjh6 [1.155857183s] Aug 10 23:48:01.919: INFO: Created: latency-svc-zxhvc Aug 10 23:48:01.933: INFO: Got endpoints: latency-svc-zxhvc [1.12187597s] Aug 10 23:48:01.969: INFO: Created: latency-svc-mcr9f Aug 10 23:48:02.016: INFO: Got endpoints: latency-svc-mcr9f [1.140752066s] Aug 10 23:48:02.020: INFO: Created: latency-svc-xdsbs Aug 10 23:48:02.035: INFO: Got endpoints: latency-svc-xdsbs [1.020036581s] Aug 10 23:48:02.052: INFO: Created: latency-svc-9t9rb Aug 10 23:48:02.077: INFO: Got endpoints: latency-svc-9t9rb [973.351519ms] Aug 10 23:48:02.101: INFO: Created: latency-svc-wk7gz Aug 10 23:48:02.116: INFO: Got endpoints: latency-svc-wk7gz [959.298922ms] Aug 10 23:48:02.178: INFO: Created: latency-svc-r2rg2 Aug 10 23:48:02.192: INFO: Got endpoints: latency-svc-r2rg2 [1.005673888s] Aug 10 23:48:02.244: INFO: Created: latency-svc-ms5mw Aug 10 23:48:02.259: INFO: Got endpoints: latency-svc-ms5mw [997.663668ms] Aug 10 23:48:02.346: INFO: Created: latency-svc-4m6zf Aug 10 23:48:02.359: INFO: Got endpoints: latency-svc-4m6zf [1.045916123s] Aug 10 23:48:02.395: INFO: Created: latency-svc-797jg Aug 10 23:48:02.421: INFO: Got endpoints: latency-svc-797jg [1.02127665s] Aug 10 23:48:02.485: INFO: Created: latency-svc-7jgg8 Aug 10 23:48:02.499: INFO: Got endpoints: latency-svc-7jgg8 [1.034384702s] Aug 10 23:48:02.519: INFO: Created: latency-svc-9zc9w Aug 10 23:48:02.529: INFO: Got endpoints: 
latency-svc-9zc9w [939.133672ms] Aug 10 23:48:02.550: INFO: Created: latency-svc-6xtrr Aug 10 23:48:02.580: INFO: Got endpoints: latency-svc-6xtrr [953.449366ms] Aug 10 23:48:02.663: INFO: Created: latency-svc-zjjfv Aug 10 23:48:02.680: INFO: Got endpoints: latency-svc-zjjfv [932.92703ms] Aug 10 23:48:02.711: INFO: Created: latency-svc-d7lth Aug 10 23:48:02.722: INFO: Got endpoints: latency-svc-d7lth [849.467592ms] Aug 10 23:48:02.741: INFO: Created: latency-svc-8r76f Aug 10 23:48:02.753: INFO: Got endpoints: latency-svc-8r76f [850.068036ms] Aug 10 23:48:02.814: INFO: Created: latency-svc-xkgql Aug 10 23:48:02.849: INFO: Got endpoints: latency-svc-xkgql [915.816802ms] Aug 10 23:48:02.868: INFO: Created: latency-svc-79dmb Aug 10 23:48:02.879: INFO: Got endpoints: latency-svc-79dmb [863.098769ms] Aug 10 23:48:02.899: INFO: Created: latency-svc-rkg6q Aug 10 23:48:02.910: INFO: Got endpoints: latency-svc-rkg6q [874.763278ms] Aug 10 23:48:02.968: INFO: Created: latency-svc-wr6sc Aug 10 23:48:02.976: INFO: Got endpoints: latency-svc-wr6sc [898.591138ms] Aug 10 23:48:02.993: INFO: Created: latency-svc-qj4p2 Aug 10 23:48:03.048: INFO: Got endpoints: latency-svc-qj4p2 [932.24659ms] Aug 10 23:48:03.143: INFO: Created: latency-svc-mfv9x Aug 10 23:48:03.151: INFO: Got endpoints: latency-svc-mfv9x [958.712852ms] Aug 10 23:48:03.192: INFO: Created: latency-svc-jgzxf Aug 10 23:48:03.205: INFO: Got endpoints: latency-svc-jgzxf [946.58354ms] Aug 10 23:48:03.234: INFO: Created: latency-svc-5p6gr Aug 10 23:48:03.292: INFO: Got endpoints: latency-svc-5p6gr [932.755908ms] Aug 10 23:48:03.295: INFO: Created: latency-svc-4pwr4 Aug 10 23:48:03.302: INFO: Got endpoints: latency-svc-4pwr4 [880.696959ms] Aug 10 23:48:03.324: INFO: Created: latency-svc-xjk5d Aug 10 23:48:03.343: INFO: Got endpoints: latency-svc-xjk5d [844.393822ms] Aug 10 23:48:03.377: INFO: Created: latency-svc-8m79s Aug 10 23:48:03.466: INFO: Got endpoints: latency-svc-8m79s [936.20466ms] Aug 10 23:48:03.468: INFO: Created: latency-svc-hvdhk Aug 10 23:48:03.477: INFO: Got endpoints: latency-svc-hvdhk [896.615048ms] Aug 10 23:48:03.505: INFO: Created: latency-svc-47nw9 Aug 10 23:48:03.519: INFO: Got endpoints: latency-svc-47nw9 [839.506515ms] Aug 10 23:48:03.540: INFO: Created: latency-svc-dpv2m Aug 10 23:48:03.559: INFO: Got endpoints: latency-svc-dpv2m [835.922341ms] Aug 10 23:48:03.618: INFO: Created: latency-svc-tllps Aug 10 23:48:03.647: INFO: Got endpoints: latency-svc-tllps [893.244583ms] Aug 10 23:48:03.677: INFO: Created: latency-svc-295fz Aug 10 23:48:03.688: INFO: Got endpoints: latency-svc-295fz [839.08631ms] Aug 10 23:48:03.747: INFO: Created: latency-svc-nw4zj Aug 10 23:48:03.761: INFO: Got endpoints: latency-svc-nw4zj [881.805871ms] Aug 10 23:48:03.792: INFO: Created: latency-svc-lmqbm Aug 10 23:48:03.809: INFO: Got endpoints: latency-svc-lmqbm [899.467997ms] Aug 10 23:48:03.826: INFO: Created: latency-svc-vk867 Aug 10 23:48:03.839: INFO: Got endpoints: latency-svc-vk867 [863.277316ms] Aug 10 23:48:03.897: INFO: Created: latency-svc-pnzv9 Aug 10 23:48:03.942: INFO: Got endpoints: latency-svc-pnzv9 [893.780228ms] Aug 10 23:48:03.972: INFO: Created: latency-svc-g7b88 Aug 10 23:48:03.983: INFO: Got endpoints: latency-svc-g7b88 [832.230531ms] Aug 10 23:48:04.045: INFO: Created: latency-svc-7828z Aug 10 23:48:04.072: INFO: Got endpoints: latency-svc-7828z [866.953747ms] Aug 10 23:48:04.073: INFO: Created: latency-svc-rmr5l Aug 10 23:48:04.096: INFO: Got endpoints: latency-svc-rmr5l [804.841941ms] Aug 10 23:48:04.127: INFO: Created: 
latency-svc-nlp2b Aug 10 23:48:04.190: INFO: Got endpoints: latency-svc-nlp2b [888.288941ms] Aug 10 23:48:04.192: INFO: Created: latency-svc-ct5zz Aug 10 23:48:04.213: INFO: Got endpoints: latency-svc-ct5zz [869.710532ms] Aug 10 23:48:04.278: INFO: Created: latency-svc-gpw5t Aug 10 23:48:04.322: INFO: Got endpoints: latency-svc-gpw5t [856.281732ms] Aug 10 23:48:04.348: INFO: Created: latency-svc-bxpcz Aug 10 23:48:04.379: INFO: Got endpoints: latency-svc-bxpcz [901.351882ms] Aug 10 23:48:04.410: INFO: Created: latency-svc-ccrqk Aug 10 23:48:04.472: INFO: Got endpoints: latency-svc-ccrqk [952.205435ms] Aug 10 23:48:04.482: INFO: Created: latency-svc-hbzdw Aug 10 23:48:04.489: INFO: Got endpoints: latency-svc-hbzdw [930.684265ms] Aug 10 23:48:04.535: INFO: Created: latency-svc-zlr4j Aug 10 23:48:04.544: INFO: Got endpoints: latency-svc-zlr4j [897.058789ms] Aug 10 23:48:04.564: INFO: Created: latency-svc-vhvwl Aug 10 23:48:04.609: INFO: Got endpoints: latency-svc-vhvwl [920.560717ms] Aug 10 23:48:04.638: INFO: Created: latency-svc-thff9 Aug 10 23:48:04.662: INFO: Got endpoints: latency-svc-thff9 [900.965199ms] Aug 10 23:48:04.693: INFO: Created: latency-svc-8hvt5 Aug 10 23:48:04.759: INFO: Got endpoints: latency-svc-8hvt5 [949.357379ms] Aug 10 23:48:04.781: INFO: Created: latency-svc-5zqld Aug 10 23:48:04.797: INFO: Got endpoints: latency-svc-5zqld [957.439251ms] Aug 10 23:48:04.816: INFO: Created: latency-svc-5bcmj Aug 10 23:48:04.827: INFO: Got endpoints: latency-svc-5bcmj [884.761851ms] Aug 10 23:48:04.847: INFO: Created: latency-svc-rt8ng Aug 10 23:48:04.857: INFO: Got endpoints: latency-svc-rt8ng [873.922467ms] Aug 10 23:48:04.910: INFO: Created: latency-svc-2r9cr Aug 10 23:48:04.932: INFO: Got endpoints: latency-svc-2r9cr [859.37733ms] Aug 10 23:48:04.987: INFO: Created: latency-svc-85mhw Aug 10 23:48:05.083: INFO: Got endpoints: latency-svc-85mhw [986.173761ms] Aug 10 23:48:05.093: INFO: Created: latency-svc-wctk4 Aug 10 23:48:05.154: INFO: Got endpoints: latency-svc-wctk4 [963.831678ms] Aug 10 23:48:05.232: INFO: Created: latency-svc-lm4rx Aug 10 23:48:05.248: INFO: Got endpoints: latency-svc-lm4rx [1.034668648s] Aug 10 23:48:05.268: INFO: Created: latency-svc-fj7p5 Aug 10 23:48:05.285: INFO: Got endpoints: latency-svc-fj7p5 [962.577994ms] Aug 10 23:48:05.302: INFO: Created: latency-svc-s7wfr Aug 10 23:48:05.315: INFO: Got endpoints: latency-svc-s7wfr [936.400831ms] Aug 10 23:48:05.370: INFO: Created: latency-svc-fbhbz Aug 10 23:48:05.392: INFO: Got endpoints: latency-svc-fbhbz [920.25291ms] Aug 10 23:48:05.393: INFO: Created: latency-svc-pzmd5 Aug 10 23:48:05.417: INFO: Got endpoints: latency-svc-pzmd5 [928.026268ms] Aug 10 23:48:05.443: INFO: Created: latency-svc-xqb7p Aug 10 23:48:05.466: INFO: Got endpoints: latency-svc-xqb7p [922.473957ms] Aug 10 23:48:05.531: INFO: Created: latency-svc-mn6gc Aug 10 23:48:05.548: INFO: Got endpoints: latency-svc-mn6gc [938.675654ms] Aug 10 23:48:05.584: INFO: Created: latency-svc-nwz4f Aug 10 23:48:05.599: INFO: Got endpoints: latency-svc-nwz4f [936.526278ms] Aug 10 23:48:05.681: INFO: Created: latency-svc-klg6m Aug 10 23:48:05.686: INFO: Got endpoints: latency-svc-klg6m [926.918296ms] Aug 10 23:48:05.723: INFO: Created: latency-svc-vsnhx Aug 10 23:48:05.738: INFO: Got endpoints: latency-svc-vsnhx [940.872693ms] Aug 10 23:48:05.754: INFO: Created: latency-svc-9rgxl Aug 10 23:48:05.866: INFO: Got endpoints: latency-svc-9rgxl [1.039442006s] Aug 10 23:48:05.870: INFO: Created: latency-svc-zplmb Aug 10 23:48:05.903: INFO: Got endpoints: 
latency-svc-zplmb [1.045871148s] Aug 10 23:48:05.940: INFO: Created: latency-svc-sm5rf Aug 10 23:48:05.948: INFO: Got endpoints: latency-svc-sm5rf [1.01657263s] Aug 10 23:48:06.035: INFO: Created: latency-svc-qhz2k Aug 10 23:48:06.064: INFO: Got endpoints: latency-svc-qhz2k [981.390384ms] Aug 10 23:48:06.070: INFO: Created: latency-svc-8jrxd Aug 10 23:48:06.118: INFO: Got endpoints: latency-svc-8jrxd [964.109697ms] Aug 10 23:48:06.190: INFO: Created: latency-svc-wpnsm Aug 10 23:48:06.203: INFO: Got endpoints: latency-svc-wpnsm [955.213692ms] Aug 10 23:48:06.256: INFO: Created: latency-svc-zkssj Aug 10 23:48:06.279: INFO: Got endpoints: latency-svc-zkssj [994.900619ms] Aug 10 23:48:06.365: INFO: Created: latency-svc-xbrb8 Aug 10 23:48:06.369: INFO: Got endpoints: latency-svc-xbrb8 [1.053932033s] Aug 10 23:48:06.395: INFO: Created: latency-svc-zzst6 Aug 10 23:48:06.406: INFO: Got endpoints: latency-svc-zzst6 [1.01390536s] Aug 10 23:48:06.432: INFO: Created: latency-svc-74wqf Aug 10 23:48:06.448: INFO: Got endpoints: latency-svc-74wqf [1.031020692s] Aug 10 23:48:06.514: INFO: Created: latency-svc-b5cc5 Aug 10 23:48:06.518: INFO: Got endpoints: latency-svc-b5cc5 [1.0520458s] Aug 10 23:48:06.562: INFO: Created: latency-svc-6n9ds Aug 10 23:48:06.575: INFO: Got endpoints: latency-svc-6n9ds [1.027042686s] Aug 10 23:48:06.592: INFO: Created: latency-svc-hvlk4 Aug 10 23:48:06.605: INFO: Got endpoints: latency-svc-hvlk4 [1.006060809s] Aug 10 23:48:06.675: INFO: Created: latency-svc-g2cdd Aug 10 23:48:06.679: INFO: Got endpoints: latency-svc-g2cdd [993.510145ms] Aug 10 23:48:06.713: INFO: Created: latency-svc-bxkp6 Aug 10 23:48:06.738: INFO: Got endpoints: latency-svc-bxkp6 [999.972437ms] Aug 10 23:48:06.760: INFO: Created: latency-svc-l2qxq Aug 10 23:48:06.774: INFO: Got endpoints: latency-svc-l2qxq [907.32698ms] Aug 10 23:48:06.819: INFO: Created: latency-svc-t468m Aug 10 23:48:06.824: INFO: Got endpoints: latency-svc-t468m [920.173639ms] Aug 10 23:48:06.868: INFO: Created: latency-svc-mqlv5 Aug 10 23:48:06.894: INFO: Got endpoints: latency-svc-mqlv5 [945.798028ms] Aug 10 23:48:06.911: INFO: Created: latency-svc-b6xqj Aug 10 23:48:06.969: INFO: Got endpoints: latency-svc-b6xqj [904.525154ms] Aug 10 23:48:07.003: INFO: Created: latency-svc-fd7t2 Aug 10 23:48:07.051: INFO: Got endpoints: latency-svc-fd7t2 [932.157539ms] Aug 10 23:48:07.152: INFO: Created: latency-svc-h8psx Aug 10 23:48:07.183: INFO: Got endpoints: latency-svc-h8psx [980.000059ms] Aug 10 23:48:07.292: INFO: Created: latency-svc-4nfkh Aug 10 23:48:07.303: INFO: Got endpoints: latency-svc-4nfkh [1.023207345s] Aug 10 23:48:07.323: INFO: Created: latency-svc-wvxkr Aug 10 23:48:07.342: INFO: Got endpoints: latency-svc-wvxkr [973.174628ms] Aug 10 23:48:07.378: INFO: Created: latency-svc-9kpql Aug 10 23:48:07.448: INFO: Got endpoints: latency-svc-9kpql [1.041420521s] Aug 10 23:48:07.452: INFO: Created: latency-svc-w67pr Aug 10 23:48:07.472: INFO: Got endpoints: latency-svc-w67pr [1.02389046s] Aug 10 23:48:07.536: INFO: Created: latency-svc-29p6s Aug 10 23:48:07.616: INFO: Got endpoints: latency-svc-29p6s [1.097257606s] Aug 10 23:48:07.618: INFO: Created: latency-svc-qlfz4 Aug 10 23:48:07.640: INFO: Got endpoints: latency-svc-qlfz4 [1.065066615s] Aug 10 23:48:07.777: INFO: Created: latency-svc-9ltgr Aug 10 23:48:07.782: INFO: Got endpoints: latency-svc-9ltgr [1.176526887s] Aug 10 23:48:07.810: INFO: Created: latency-svc-kc2k6 Aug 10 23:48:07.821: INFO: Got endpoints: latency-svc-kc2k6 [1.141847524s] Aug 10 23:48:07.854: INFO: Created: 
latency-svc-z6rkn Aug 10 23:48:07.869: INFO: Got endpoints: latency-svc-z6rkn [1.131741321s] Aug 10 23:48:07.960: INFO: Created: latency-svc-k76gt Aug 10 23:48:07.990: INFO: Got endpoints: latency-svc-k76gt [1.215771914s] Aug 10 23:48:08.082: INFO: Created: latency-svc-pf8cb Aug 10 23:48:08.177: INFO: Got endpoints: latency-svc-pf8cb [1.353676905s] Aug 10 23:48:08.249: INFO: Created: latency-svc-r2gdq Aug 10 23:48:08.259: INFO: Got endpoints: latency-svc-r2gdq [1.365086201s] Aug 10 23:48:08.279: INFO: Created: latency-svc-llnbx Aug 10 23:48:08.295: INFO: Got endpoints: latency-svc-llnbx [1.326694042s] Aug 10 23:48:08.321: INFO: Created: latency-svc-885fb Aug 10 23:48:08.332: INFO: Got endpoints: latency-svc-885fb [1.281713689s] Aug 10 23:48:08.406: INFO: Created: latency-svc-h5ndw Aug 10 23:48:08.417: INFO: Got endpoints: latency-svc-h5ndw [1.233854502s] Aug 10 23:48:08.433: INFO: Created: latency-svc-cmgrb Aug 10 23:48:08.447: INFO: Got endpoints: latency-svc-cmgrb [1.14416296s] Aug 10 23:48:08.464: INFO: Created: latency-svc-6rnr8 Aug 10 23:48:08.478: INFO: Got endpoints: latency-svc-6rnr8 [1.135721553s] Aug 10 23:48:08.495: INFO: Created: latency-svc-xqxqv Aug 10 23:48:08.543: INFO: Got endpoints: latency-svc-xqxqv [1.095517486s] Aug 10 23:48:08.561: INFO: Created: latency-svc-lvjql Aug 10 23:48:08.581: INFO: Got endpoints: latency-svc-lvjql [1.108224063s] Aug 10 23:48:08.597: INFO: Created: latency-svc-kpbk9 Aug 10 23:48:08.627: INFO: Got endpoints: latency-svc-kpbk9 [1.010882665s] Aug 10 23:48:08.681: INFO: Created: latency-svc-hdzvn Aug 10 23:48:08.712: INFO: Got endpoints: latency-svc-hdzvn [1.071868001s] Aug 10 23:48:08.716: INFO: Created: latency-svc-tz6gt Aug 10 23:48:08.740: INFO: Got endpoints: latency-svc-tz6gt [957.884993ms] Aug 10 23:48:08.831: INFO: Created: latency-svc-bzwss Aug 10 23:48:08.871: INFO: Got endpoints: latency-svc-bzwss [1.050131579s] Aug 10 23:48:08.872: INFO: Created: latency-svc-mpqtc Aug 10 23:48:08.902: INFO: Got endpoints: latency-svc-mpqtc [1.032043614s] Aug 10 23:48:08.975: INFO: Created: latency-svc-rf529 Aug 10 23:48:08.981: INFO: Got endpoints: latency-svc-rf529 [990.827773ms] Aug 10 23:48:09.030: INFO: Created: latency-svc-kxwvj Aug 10 23:48:09.044: INFO: Got endpoints: latency-svc-kxwvj [867.014058ms] Aug 10 23:48:09.065: INFO: Created: latency-svc-mkm7c Aug 10 23:48:09.143: INFO: Got endpoints: latency-svc-mkm7c [883.131538ms] Aug 10 23:48:09.145: INFO: Created: latency-svc-6m9nt Aug 10 23:48:09.153: INFO: Got endpoints: latency-svc-6m9nt [857.087252ms] Aug 10 23:48:09.178: INFO: Created: latency-svc-w6g5b Aug 10 23:48:09.195: INFO: Got endpoints: latency-svc-w6g5b [862.818069ms] Aug 10 23:48:09.233: INFO: Created: latency-svc-th7gq Aug 10 23:48:09.298: INFO: Got endpoints: latency-svc-th7gq [881.037212ms] Aug 10 23:48:09.311: INFO: Created: latency-svc-5n4nz Aug 10 23:48:09.329: INFO: Got endpoints: latency-svc-5n4nz [882.049809ms] Aug 10 23:48:09.370: INFO: Created: latency-svc-2x7cp Aug 10 23:48:09.382: INFO: Got endpoints: latency-svc-2x7cp [904.054448ms] Aug 10 23:48:09.451: INFO: Created: latency-svc-jxbs9 Aug 10 23:48:09.460: INFO: Got endpoints: latency-svc-jxbs9 [916.802922ms] Aug 10 23:48:09.484: INFO: Created: latency-svc-zlhbh Aug 10 23:48:09.509: INFO: Got endpoints: latency-svc-zlhbh [928.218577ms] Aug 10 23:48:09.540: INFO: Created: latency-svc-xrdgt Aug 10 23:48:09.598: INFO: Got endpoints: latency-svc-xrdgt [971.052674ms] Aug 10 23:48:09.646: INFO: Created: latency-svc-6gnqn Aug 10 23:48:09.682: INFO: Got endpoints: 
latency-svc-6gnqn [969.874633ms] Aug 10 23:48:09.736: INFO: Created: latency-svc-vqnqv Aug 10 23:48:09.749: INFO: Got endpoints: latency-svc-vqnqv [1.00921985s] Aug 10 23:48:09.767: INFO: Created: latency-svc-hzjf4 Aug 10 23:48:09.779: INFO: Got endpoints: latency-svc-hzjf4 [907.439638ms] Aug 10 23:48:09.827: INFO: Created: latency-svc-j8465 Aug 10 23:48:09.910: INFO: Got endpoints: latency-svc-j8465 [1.008454408s] Aug 10 23:48:09.912: INFO: Created: latency-svc-n8259 Aug 10 23:48:09.939: INFO: Got endpoints: latency-svc-n8259 [958.667305ms] Aug 10 23:48:10.071: INFO: Created: latency-svc-vhxmn Aug 10 23:48:10.097: INFO: Got endpoints: latency-svc-vhxmn [1.052319191s] Aug 10 23:48:10.097: INFO: Created: latency-svc-wvnv4 Aug 10 23:48:10.122: INFO: Got endpoints: latency-svc-wvnv4 [979.725843ms] Aug 10 23:48:10.161: INFO: Created: latency-svc-f65qn Aug 10 23:48:10.219: INFO: Got endpoints: latency-svc-f65qn [1.066690665s] Aug 10 23:48:10.220: INFO: Latencies: [70.316287ms 143.058428ms 196.33755ms 226.079709ms 286.254838ms 421.577012ms 436.299205ms 466.937772ms 497.338062ms 561.849996ms 616.992346ms 657.880991ms 761.163712ms 794.865642ms 804.841941ms 811.390174ms 832.230531ms 835.922341ms 839.08631ms 839.506515ms 844.393822ms 849.467592ms 850.068036ms 856.281732ms 857.087252ms 859.37733ms 862.818069ms 863.098769ms 863.277316ms 864.775528ms 866.953747ms 867.014058ms 869.710532ms 873.922467ms 874.763278ms 880.696959ms 881.037212ms 881.805871ms 882.049809ms 883.131538ms 884.761851ms 886.39039ms 887.341657ms 888.288941ms 890.770259ms 893.225942ms 893.244583ms 893.780228ms 896.615048ms 897.058789ms 898.591138ms 899.467997ms 900.965199ms 901.351882ms 904.054448ms 904.525154ms 907.32698ms 907.439638ms 912.181606ms 915.816802ms 916.802922ms 920.173639ms 920.25291ms 920.560717ms 922.260993ms 922.473957ms 926.918296ms 928.026268ms 928.218577ms 928.328958ms 930.684265ms 932.157539ms 932.24659ms 932.755908ms 932.92703ms 936.20466ms 936.400831ms 936.526278ms 938.675654ms 939.133672ms 940.872693ms 945.798028ms 946.58354ms 949.357379ms 952.205435ms 953.449366ms 955.213692ms 957.439251ms 957.884993ms 958.667305ms 958.712852ms 959.298922ms 962.577994ms 963.831678ms 964.109697ms 969.874633ms 971.052674ms 973.174628ms 973.351519ms 979.725843ms 980.000059ms 981.390384ms 986.173761ms 990.827773ms 993.510145ms 994.900619ms 997.663668ms 999.972437ms 1.005673888s 1.006060809s 1.008454408s 1.00921985s 1.010882665s 1.01390536s 1.01657263s 1.020036581s 1.02127665s 1.023207345s 1.02389046s 1.027042686s 1.031020692s 1.032043614s 1.034384702s 1.034668648s 1.039442006s 1.041420521s 1.045871148s 1.045916123s 1.050131579s 1.0520458s 1.052319191s 1.053932033s 1.056425637s 1.059711646s 1.065066615s 1.066690665s 1.071868001s 1.09100984s 1.092432237s 1.095517486s 1.097257606s 1.108224063s 1.108225281s 1.109375284s 1.116432635s 1.12187597s 1.131741321s 1.132225073s 1.135721553s 1.137605313s 1.140752066s 1.141847524s 1.14416296s 1.149716309s 1.152461499s 1.155857183s 1.164316384s 1.172888465s 1.175311327s 1.176397741s 1.176526887s 1.18355271s 1.185615611s 1.187380114s 1.214798381s 1.215771914s 1.224330549s 1.228687394s 1.233854502s 1.241187949s 1.244971522s 1.250367523s 1.253647712s 1.263793328s 1.281713689s 1.287619876s 1.292325979s 1.294791796s 1.326694042s 1.353676905s 1.365086201s 1.394018506s 1.456036234s 1.504479396s 1.77103296s 1.961966193s 2.202401585s 2.271838943s 2.426005376s 2.475110812s 2.481840888s 2.548272495s 2.655857556s 2.677163448s 2.766749376s 2.815088183s 2.83234739s 2.834026698s 2.836830802s 2.850879723s] Aug 
10 23:48:10.220: INFO: 50 %ile: 980.000059ms Aug 10 23:48:10.220: INFO: 90 %ile: 1.365086201s Aug 10 23:48:10.220: INFO: 99 %ile: 2.836830802s Aug 10 23:48:10.220: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:48:10.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5175" for this suite. • [SLOW TEST:19.517 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":76,"skipped":1249,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:48:10.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Aug 10 23:48:10.816: INFO: created pod pod-service-account-defaultsa Aug 10 23:48:10.816: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 10 23:48:10.829: INFO: created pod pod-service-account-mountsa Aug 10 23:48:10.829: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 10 23:48:10.850: INFO: created pod pod-service-account-nomountsa Aug 10 23:48:10.850: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 10 23:48:10.865: INFO: created pod pod-service-account-defaultsa-mountspec Aug 10 23:48:10.865: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 10 23:48:10.880: INFO: created pod pod-service-account-mountsa-mountspec Aug 10 23:48:10.880: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 10 23:48:10.940: INFO: created pod pod-service-account-nomountsa-mountspec Aug 10 23:48:10.940: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 10 23:48:10.973: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 10 23:48:10.973: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 10 23:48:10.990: INFO: created pod pod-service-account-mountsa-nomountspec Aug 10 23:48:10.990: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 10 23:48:11.023: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 10 23:48:11.023: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:48:11.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5400" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":77,"skipped":1260,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:48:11.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 23:48:13.472: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 23:48:16.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 23:48:18.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 23:48:20.415: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700093, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 23:48:23.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Aug 10 23:48:24.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Aug 10 23:48:25.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Aug 10 23:48:26.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:48:26.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7228-crds.webhook.example.com via the AdmissionRegistration API Aug 10 23:48:28.915: INFO: Waiting for webhook configuration to be ready... Aug 10 23:48:30.092: INFO: Waiting for webhook configuration to be ready... Aug 10 23:48:31.198: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:48:32.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1170" for this suite. STEP: Destroying namespace "webhook-1170-markers" for this suite. 
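------------------------------
The AdmissionWebhook run above has two moving parts: a mutating webhook registered for the custom resource, and a CRD whose storage version is flipped from v1 to v2 mid-test. A minimal client-go sketch of the registration step follows; the handler path, service coordinates, and CA bundle are placeholders, not values from this run (the suite wires these to its sample-webhook-deployment).

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/mutating-custom-resource" // hypothetical handler path
	port := int32(443)
	sideEffects := admissionv1.SideEffectClassNone
	cfgObj := &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cr-mutator"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "cr-mutator.webhook.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				// Placeholder service reference; CABundle must be the CA
				// that signed the webhook server's serving certificate.
				Service:  &admissionv1.ServiceReference{Namespace: "default", Name: "e2e-test-webhook", Path: &path, Port: &port},
				CABundle: []byte("<PEM-encoded CA bundle>"),
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
				Rule: admissionv1.Rule{
					// Matching both served versions is what lets the webhook
					// see the resource regardless of which version is storage.
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1", "v2"},
					Resources:   []string{"e2e-test-webhook-7228-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(context.TODO(), cfgObj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The repeated "Waiting for webhook configuration to be ready..." lines above correspond to the suite probing with marker requests (hence the extra webhook-1170-markers namespace) until the configuration is actually being enforced.
------------------------------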
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.946 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":78,"skipped":1264,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:48:33.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:48:33.147: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:48:40.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8020" for this suite. 
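------------------------------
The "listing custom resource definition objects" spec goes through the apiextensions client rather than the core clientset. A sketch of the equivalent call (kubeconfig path as in this run; everything else is stock client-go):

package main

import (
	"context"
	"fmt"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cl, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CRDs are cluster-scoped, so there is no namespace argument.
	list, err := cl.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range list.Items {
		fmt.Println(crd.Name)
	}
}
------------------------------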
• [SLOW TEST:7.016 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":79,"skipped":1268,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:48:40.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:48:40.369: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 10 23:48:43.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 create -f -' Aug 10 23:48:48.195: INFO: stderr: "" Aug 10 23:48:48.195: INFO: stdout: "e2e-test-crd-publish-openapi-1639-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 10 23:48:48.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 delete e2e-test-crd-publish-openapi-1639-crds test-foo' Aug 10 23:48:48.324: INFO: stderr: "" Aug 10 23:48:48.324: INFO: stdout: "e2e-test-crd-publish-openapi-1639-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 10 23:48:48.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 apply -f -' Aug 10 23:48:48.693: INFO: stderr: "" Aug 10 23:48:48.693: INFO: stdout: "e2e-test-crd-publish-openapi-1639-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 10 23:48:48.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 delete e2e-test-crd-publish-openapi-1639-crds test-foo' Aug 10 23:48:48.856: INFO: stderr: "" Aug 10 23:48:48.856: INFO: stdout: "e2e-test-crd-publish-openapi-1639-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 10 23:48:48.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6012 create -f -' Aug 10 23:48:49.151: INFO: rc: 1 Aug 10 23:48:49.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 apply -f -' Aug 10 23:48:49.481: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 10 23:48:49.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 create -f -' Aug 10 23:48:49.783: INFO: rc: 1 Aug 10 23:48:49.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6012 apply -f -' Aug 10 23:48:50.094: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 10 23:48:50.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1639-crds' Aug 10 23:48:50.449: INFO: stderr: "" Aug 10 23:48:50.449: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1639-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 10 23:48:50.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1639-crds.metadata' Aug 10 23:48:50.804: INFO: stderr: "" Aug 10 23:48:50.804: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1639-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. 
It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 10 23:48:50.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1639-crds.spec' Aug 10 23:48:51.135: INFO: stderr: "" Aug 10 23:48:51.135: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1639-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 10 23:48:51.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1639-crds.spec.bars' Aug 10 23:48:51.490: INFO: stderr: "" Aug 10 23:48:51.490: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1639-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<integer>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 10 23:48:51.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1639-crds.spec.bars2' Aug 10 23:48:51.778: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:48:55.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6012" for this suite. 
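------------------------------
The CRD-with-validation-schema run above is driven entirely by the schema the CRD publishes: kubectl uses it for client-side validation (the rc: 1 rejections) and for the kubectl explain output captured in the log. Below is a sketch of a CRD carrying such a schema, reconstructed from the explain output; the group, kind, and field names mirror the test's foo fixture, but treat the exact shape as illustrative rather than the suite's source.

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cl, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// spec.bars[].name is required, matching the "rejects request without
	// required properties" step; unknown properties are rejected because
	// nothing sets x-kubernetes-preserve-unknown-fields.
	bar := apiextv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"},
		Properties: map[string]apiextv1.JSONSchemaProps{
			"name": {Type: "string", Description: "Name of Bar."},
			"age":  {Type: "integer", Description: "Age of Bar."},
			"bazs": {Type: "array", Description: "List of Bazs.", Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &apiextv1.JSONSchemaProps{Type: "string"}}},
		},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-publish-openapi-test-foo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-foo.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList"},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:   "v1",
				Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {
								Type:        "object",
								Description: "Specification of Foo",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"bars": {Type: "array", Description: "List of Bars and their specs.", Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &bar}},
								},
							},
							"status": {Type: "object", Description: "Status of Foo"},
						},
					},
				},
			}},
		},
	}
	if _, err := cl.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------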
• [SLOW TEST:15.248 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":80,"skipped":1269,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:48:55.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 10 23:48:55.413: INFO: Waiting up to 5m0s for pod "pod-890b5054-3bc6-43c0-b67e-7b69081c603f" in namespace "emptydir-8424" to be "Succeeded or Failed" Aug 10 23:48:55.431: INFO: Pod "pod-890b5054-3bc6-43c0-b67e-7b69081c603f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.317279ms Aug 10 23:48:57.538: INFO: Pod "pod-890b5054-3bc6-43c0-b67e-7b69081c603f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124803857s Aug 10 23:48:59.542: INFO: Pod "pod-890b5054-3bc6-43c0-b67e-7b69081c603f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128864135s STEP: Saw pod success Aug 10 23:48:59.542: INFO: Pod "pod-890b5054-3bc6-43c0-b67e-7b69081c603f" satisfied condition "Succeeded or Failed" Aug 10 23:48:59.545: INFO: Trying to get logs from node latest-worker2 pod pod-890b5054-3bc6-43c0-b67e-7b69081c603f container test-container: STEP: delete the pod Aug 10 23:48:59.774: INFO: Waiting for pod pod-890b5054-3bc6-43c0-b67e-7b69081c603f to disappear Aug 10 23:48:59.781: INFO: Pod pod-890b5054-3bc6-43c0-b67e-7b69081c603f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:48:59.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8424" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1270,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:48:59.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:15.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5484" for this suite. • [SLOW TEST:16.167 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":82,"skipped":1277,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:15.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3165862d-dfd9-415a-84c7-21cdde4131ee STEP: Creating a pod to test consume configMaps Aug 10 23:49:16.056: INFO: Waiting up to 5m0s for pod "pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5" in namespace "configmap-1482" to be "Succeeded or Failed" Aug 10 23:49:16.078: INFO: Pod "pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.784592ms Aug 10 23:49:18.082: INFO: Pod "pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025665326s Aug 10 23:49:20.086: INFO: Pod "pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5": Phase="Running", Reason="", readiness=true. Elapsed: 4.029769361s Aug 10 23:49:22.090: INFO: Pod "pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.034075197s STEP: Saw pod success Aug 10 23:49:22.090: INFO: Pod "pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5" satisfied condition "Succeeded or Failed" Aug 10 23:49:22.093: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5 container configmap-volume-test: STEP: delete the pod Aug 10 23:49:22.118: INFO: Waiting for pod pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5 to disappear Aug 10 23:49:22.139: INFO: Pod pod-configmaps-555f9890-9254-4b8b-8ea5-9c774b1a17d5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:22.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1482" for this suite. • [SLOW TEST:6.186 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1277,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:22.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:26.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6293" for this suite. 
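------------------------------
The Kubelet spec above only checks that a busybox echo lands in the pod's logs. Fetching those logs programmatically looks roughly like this; the namespace matches this run, but the pod name is a placeholder (the suite generates one with a UUID suffix).

package main

import (
	"context"
	"fmt"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GetLogs builds a rest.Request against the pod's log subresource;
	// Stream executes it. Pod name here is hypothetical.
	rc, err := cs.CoreV1().Pods("kubelet-test-6293").GetLogs("busybox-scheduling-pod", &corev1.PodLogOptions{}).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	out, _ := ioutil.ReadAll(rc)
	fmt.Print(string(out))
}
------------------------------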
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1279,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:26.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 23:49:26.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7" in namespace "projected-3367" to be "Succeeded or Failed" Aug 10 23:49:26.349: INFO: Pod "downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.677408ms Aug 10 23:49:28.353: INFO: Pod "downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007002548s Aug 10 23:49:30.356: INFO: Pod "downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010822417s STEP: Saw pod success Aug 10 23:49:30.357: INFO: Pod "downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7" satisfied condition "Succeeded or Failed" Aug 10 23:49:30.359: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7 container client-container: STEP: delete the pod Aug 10 23:49:30.391: INFO: Waiting for pod downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7 to disappear Aug 10 23:49:30.397: INFO: Pod downwardapi-volume-afd7572f-ae43-40bd-9553-51c47dbf51c7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:30.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3367" for this suite. 
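------------------------------
The "should provide podname only" spec above mounts a projected downwardAPI volume that exposes metadata.name as a file. A sketch of that volume wiring, under illustrative names (the suite uses its own agnhost image and generated pod names):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podnamePod builds a pod whose container can read its own name from
// /etc/podinfo/podname via the downward API; all names are placeholders.
func podnamePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative stand-in image
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = podnamePod() }
------------------------------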
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1287,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:30.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:49:30.549: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0675472a-6b57-48a0-918b-b7edc69318f9", Controller:(*bool)(0xc0061b87b2), BlockOwnerDeletion:(*bool)(0xc0061b87b3)}} Aug 10 23:49:30.590: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b0ea997c-a47c-4cc4-9ede-7280f1ede91f", Controller:(*bool)(0xc0061d87da), BlockOwnerDeletion:(*bool)(0xc0061d87db)}} Aug 10 23:49:30.628: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"16386860-aaba-488b-af19-d54c16dc9bac", Controller:(*bool)(0xc0061d89aa), BlockOwnerDeletion:(*bool)(0xc0061d89ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:35.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7240" for this suite. 
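------------------------------
The OwnerReferences dumps above show the cycle the garbage-collector test builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, and deletion must not deadlock on it. Creating one link of that cycle looks like the sketch below; the UID is taken from the log, the rest is a minimal placeholder pod.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedPod returns a pod owned by another pod. The owner's UID must be the
// live UID of the owner object, as in the log's OwnerReferences dumps.
func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
	truth := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID,
				Controller:         &truth,
				BlockOwnerDeletion: &truth,
			}},
		},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "busybox"}}},
	}
}

func main() {
	_ = ownedPod("pod1", "pod3", types.UID("0675472a-6b57-48a0-918b-b7edc69318f9"))
}
------------------------------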
• [SLOW TEST:5.278 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":86,"skipped":1292,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:35.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:49:35.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config version' Aug 10 23:49:35.912: INFO: stderr: "" Aug 10 23:49:35.912: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.0.523+97c5f1f7632f2d\", GitCommit:\"97c5f1f7632f2d349303515830be76f6c1084b19\", GitTreeState:\"clean\", BuildDate:\"2020-08-07T13:25:26Z\", GoVersion:\"go1.14.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:35.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1016" for this suite. 
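------------------------------
The kubectl version check above can be reproduced against the same cluster with the discovery client, which hits /version and returns the server half of what kubectl prints:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same version.Info struct that appears in the log's stdout above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server: %s (git %s, %s)\n", v.GitVersion, v.GitCommit, v.Platform)
}
------------------------------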
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":87,"skipped":1294,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:35.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-6704 STEP: creating replication controller nodeport-test in namespace services-6704 I0810 23:49:36.108444 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6704, replica count: 2 I0810 23:49:39.158808 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:49:42.159101 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 23:49:42.159: INFO: Creating new exec pod Aug 10 23:49:47.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6704 execpodr7g8t -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 10 23:49:47.462: INFO: stderr: "I0810 23:49:47.352201 1701 log.go:181] (0xc0006c5340) (0xc0009e9720) Create stream\nI0810 23:49:47.352258 1701 log.go:181] (0xc0006c5340) (0xc0009e9720) Stream added, broadcasting: 1\nI0810 23:49:47.354652 1701 log.go:181] (0xc0006c5340) Reply frame received for 1\nI0810 23:49:47.354693 1701 log.go:181] (0xc0006c5340) (0xc000b1c1e0) Create stream\nI0810 23:49:47.354738 1701 log.go:181] (0xc0006c5340) (0xc000b1c1e0) Stream added, broadcasting: 3\nI0810 23:49:47.355564 1701 log.go:181] (0xc0006c5340) Reply frame received for 3\nI0810 23:49:47.355600 1701 log.go:181] (0xc0006c5340) (0xc000989400) Create stream\nI0810 23:49:47.355609 1701 log.go:181] (0xc0006c5340) (0xc000989400) Stream added, broadcasting: 5\nI0810 23:49:47.356460 1701 log.go:181] (0xc0006c5340) Reply frame received for 5\nI0810 23:49:47.452917 1701 log.go:181] (0xc0006c5340) Data frame received for 5\nI0810 23:49:47.452951 1701 log.go:181] (0xc000989400) (5) Data frame handling\nI0810 23:49:47.452978 1701 log.go:181] (0xc000989400) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0810 23:49:47.453402 1701 log.go:181] (0xc0006c5340) Data frame received for 5\nI0810 23:49:47.453425 1701 log.go:181] (0xc000989400) (5) Data frame handling\nI0810 23:49:47.453440 1701 log.go:181] (0xc000989400) (5) Data frame sent\nI0810 23:49:47.453446 1701 log.go:181] (0xc0006c5340) Data frame received for 5\nI0810 23:49:47.453452 1701 log.go:181] (0xc000989400) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0810 
23:49:47.453503 1701 log.go:181] (0xc0006c5340) Data frame received for 3\nI0810 23:49:47.453530 1701 log.go:181] (0xc000b1c1e0) (3) Data frame handling\nI0810 23:49:47.455746 1701 log.go:181] (0xc0006c5340) Data frame received for 1\nI0810 23:49:47.455782 1701 log.go:181] (0xc0009e9720) (1) Data frame handling\nI0810 23:49:47.455802 1701 log.go:181] (0xc0009e9720) (1) Data frame sent\nI0810 23:49:47.455827 1701 log.go:181] (0xc0006c5340) (0xc0009e9720) Stream removed, broadcasting: 1\nI0810 23:49:47.455846 1701 log.go:181] (0xc0006c5340) Go away received\nI0810 23:49:47.456218 1701 log.go:181] (0xc0006c5340) (0xc0009e9720) Stream removed, broadcasting: 1\nI0810 23:49:47.456236 1701 log.go:181] (0xc0006c5340) (0xc000b1c1e0) Stream removed, broadcasting: 3\nI0810 23:49:47.456243 1701 log.go:181] (0xc0006c5340) (0xc000989400) Stream removed, broadcasting: 5\n" Aug 10 23:49:47.462: INFO: stdout: "" Aug 10 23:49:47.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6704 execpodr7g8t -- /bin/sh -x -c nc -zv -t -w 2 10.103.98.33 80' Aug 10 23:49:47.668: INFO: stderr: "I0810 23:49:47.600186 1719 log.go:181] (0xc000550000) (0xc000822460) Create stream\nI0810 23:49:47.600235 1719 log.go:181] (0xc000550000) (0xc000822460) Stream added, broadcasting: 1\nI0810 23:49:47.602159 1719 log.go:181] (0xc000550000) Reply frame received for 1\nI0810 23:49:47.602221 1719 log.go:181] (0xc000550000) (0xc0006240a0) Create stream\nI0810 23:49:47.602236 1719 log.go:181] (0xc000550000) (0xc0006240a0) Stream added, broadcasting: 3\nI0810 23:49:47.603319 1719 log.go:181] (0xc000550000) Reply frame received for 3\nI0810 23:49:47.603357 1719 log.go:181] (0xc000550000) (0xc00055c5a0) Create stream\nI0810 23:49:47.603375 1719 log.go:181] (0xc000550000) (0xc00055c5a0) Stream added, broadcasting: 5\nI0810 23:49:47.604414 1719 log.go:181] (0xc000550000) Reply frame received for 5\nI0810 23:49:47.660618 1719 log.go:181] (0xc000550000) Data frame received for 3\nI0810 23:49:47.660673 1719 log.go:181] (0xc0006240a0) (3) Data frame handling\nI0810 23:49:47.660710 1719 log.go:181] (0xc000550000) Data frame received for 5\nI0810 23:49:47.660844 1719 log.go:181] (0xc00055c5a0) (5) Data frame handling\nI0810 23:49:47.660874 1719 log.go:181] (0xc00055c5a0) (5) Data frame sent\nI0810 23:49:47.660895 1719 log.go:181] (0xc000550000) Data frame received for 5\nI0810 23:49:47.660910 1719 log.go:181] (0xc00055c5a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.98.33 80\nConnection to 10.103.98.33 80 port [tcp/http] succeeded!\nI0810 23:49:47.662737 1719 log.go:181] (0xc000550000) Data frame received for 1\nI0810 23:49:47.662757 1719 log.go:181] (0xc000822460) (1) Data frame handling\nI0810 23:49:47.662773 1719 log.go:181] (0xc000822460) (1) Data frame sent\nI0810 23:49:47.662783 1719 log.go:181] (0xc000550000) (0xc000822460) Stream removed, broadcasting: 1\nI0810 23:49:47.662793 1719 log.go:181] (0xc000550000) Go away received\nI0810 23:49:47.663406 1719 log.go:181] (0xc000550000) (0xc000822460) Stream removed, broadcasting: 1\nI0810 23:49:47.663428 1719 log.go:181] (0xc000550000) (0xc0006240a0) Stream removed, broadcasting: 3\nI0810 23:49:47.663438 1719 log.go:181] (0xc000550000) (0xc00055c5a0) Stream removed, broadcasting: 5\n" Aug 10 23:49:47.669: INFO: stdout: "" Aug 10 23:49:47.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6704 execpodr7g8t -- /bin/sh -x -c 
nc -zv -t -w 2 172.18.0.14 31673' Aug 10 23:49:47.883: INFO: stderr: "I0810 23:49:47.801825 1737 log.go:181] (0xc000d84000) (0xc000a7a000) Create stream\nI0810 23:49:47.801909 1737 log.go:181] (0xc000d84000) (0xc000a7a000) Stream added, broadcasting: 1\nI0810 23:49:47.804228 1737 log.go:181] (0xc000d84000) Reply frame received for 1\nI0810 23:49:47.804272 1737 log.go:181] (0xc000d84000) (0xc000a7b360) Create stream\nI0810 23:49:47.804283 1737 log.go:181] (0xc000d84000) (0xc000a7b360) Stream added, broadcasting: 3\nI0810 23:49:47.805747 1737 log.go:181] (0xc000d84000) Reply frame received for 3\nI0810 23:49:47.805806 1737 log.go:181] (0xc000d84000) (0xc000980640) Create stream\nI0810 23:49:47.805840 1737 log.go:181] (0xc000d84000) (0xc000980640) Stream added, broadcasting: 5\nI0810 23:49:47.806743 1737 log.go:181] (0xc000d84000) Reply frame received for 5\nI0810 23:49:47.876426 1737 log.go:181] (0xc000d84000) Data frame received for 3\nI0810 23:49:47.876456 1737 log.go:181] (0xc000a7b360) (3) Data frame handling\nI0810 23:49:47.876476 1737 log.go:181] (0xc000d84000) Data frame received for 5\nI0810 23:49:47.876494 1737 log.go:181] (0xc000980640) (5) Data frame handling\nI0810 23:49:47.876513 1737 log.go:181] (0xc000980640) (5) Data frame sent\nI0810 23:49:47.876526 1737 log.go:181] (0xc000d84000) Data frame received for 5\nI0810 23:49:47.876538 1737 log.go:181] (0xc000980640) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31673\nConnection to 172.18.0.14 31673 port [tcp/31673] succeeded!\nI0810 23:49:47.877900 1737 log.go:181] (0xc000d84000) Data frame received for 1\nI0810 23:49:47.877918 1737 log.go:181] (0xc000a7a000) (1) Data frame handling\nI0810 23:49:47.877936 1737 log.go:181] (0xc000a7a000) (1) Data frame sent\nI0810 23:49:47.877956 1737 log.go:181] (0xc000d84000) (0xc000a7a000) Stream removed, broadcasting: 1\nI0810 23:49:47.878096 1737 log.go:181] (0xc000d84000) Go away received\nI0810 23:49:47.878296 1737 log.go:181] (0xc000d84000) (0xc000a7a000) Stream removed, broadcasting: 1\nI0810 23:49:47.878309 1737 log.go:181] (0xc000d84000) (0xc000a7b360) Stream removed, broadcasting: 3\nI0810 23:49:47.878315 1737 log.go:181] (0xc000d84000) (0xc000980640) Stream removed, broadcasting: 5\n" Aug 10 23:49:47.883: INFO: stdout: "" Aug 10 23:49:47.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6704 execpodr7g8t -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31673' Aug 10 23:49:48.111: INFO: stderr: "I0810 23:49:48.027669 1755 log.go:181] (0xc0001e4fd0) (0xc000b248c0) Create stream\nI0810 23:49:48.027731 1755 log.go:181] (0xc0001e4fd0) (0xc000b248c0) Stream added, broadcasting: 1\nI0810 23:49:48.035270 1755 log.go:181] (0xc0001e4fd0) Reply frame received for 1\nI0810 23:49:48.035326 1755 log.go:181] (0xc0001e4fd0) (0xc00049aa00) Create stream\nI0810 23:49:48.035342 1755 log.go:181] (0xc0001e4fd0) (0xc00049aa00) Stream added, broadcasting: 3\nI0810 23:49:48.036497 1755 log.go:181] (0xc0001e4fd0) Reply frame received for 3\nI0810 23:49:48.036539 1755 log.go:181] (0xc0001e4fd0) (0xc0003366e0) Create stream\nI0810 23:49:48.036571 1755 log.go:181] (0xc0001e4fd0) (0xc0003366e0) Stream added, broadcasting: 5\nI0810 23:49:48.039166 1755 log.go:181] (0xc0001e4fd0) Reply frame received for 5\nI0810 23:49:48.103459 1755 log.go:181] (0xc0001e4fd0) Data frame received for 3\nI0810 23:49:48.103494 1755 log.go:181] (0xc00049aa00) (3) Data frame handling\nI0810 23:49:48.103592 1755 log.go:181] (0xc0001e4fd0) Data 
frame received for 5\nI0810 23:49:48.103599 1755 log.go:181] (0xc0003366e0) (5) Data frame handling\nI0810 23:49:48.103605 1755 log.go:181] (0xc0003366e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31673\nConnection to 172.18.0.12 31673 port [tcp/31673] succeeded!\nI0810 23:49:48.103688 1755 log.go:181] (0xc0001e4fd0) Data frame received for 5\nI0810 23:49:48.103728 1755 log.go:181] (0xc0003366e0) (5) Data frame handling\nI0810 23:49:48.105411 1755 log.go:181] (0xc0001e4fd0) Data frame received for 1\nI0810 23:49:48.105430 1755 log.go:181] (0xc000b248c0) (1) Data frame handling\nI0810 23:49:48.105446 1755 log.go:181] (0xc000b248c0) (1) Data frame sent\nI0810 23:49:48.105457 1755 log.go:181] (0xc0001e4fd0) (0xc000b248c0) Stream removed, broadcasting: 1\nI0810 23:49:48.105553 1755 log.go:181] (0xc0001e4fd0) Go away received\nI0810 23:49:48.105793 1755 log.go:181] (0xc0001e4fd0) (0xc000b248c0) Stream removed, broadcasting: 1\nI0810 23:49:48.105810 1755 log.go:181] (0xc0001e4fd0) (0xc00049aa00) Stream removed, broadcasting: 3\nI0810 23:49:48.105818 1755 log.go:181] (0xc0001e4fd0) (0xc0003366e0) Stream removed, broadcasting: 5\n" Aug 10 23:49:48.111: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:49:48.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6704" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.197 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":88,"skipped":1305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:49:48.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 10 23:49:48.173: INFO: >>> kubeConfig: /root/.kube/config Aug 10 23:49:51.169: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:50:04.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8958" for this suite. 
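------------------------------
The "multiple CRDs of different groups" spec asserts that both custom resources show up in the cluster's published OpenAPI document. That document is served at /openapi/v2 and can be fetched directly; the substring check below is an illustrative stand-in for the suite's schema comparison.

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The aggregated swagger document; CRD-published definitions land here,
	// which is why the suite re-reads it after registering each CRD.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Contains(string(raw), "crd-publish-openapi-test"))
}
------------------------------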
• [SLOW TEST:16.086 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":89,"skipped":1331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:50:04.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 23:52:04.329: INFO: Deleting pod "var-expansion-9237ac80-7cbd-4a9b-a728-bbd26eedfa21" in namespace "var-expansion-988" Aug 10 23:52:04.332: INFO: Wait up to 5m0s for pod "var-expansion-9237ac80-7cbd-4a9b-a728-bbd26eedfa21" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:52:08.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-988" for this suite. 
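------------------------------
The Variable Expansion spec above is a negative test: subPathExpr only substitutes $(VAR) references to container environment variables, so a value containing backticks is expected to be rejected rather than shell-expanded, and the pod never starts. The roughly two-minute gap before the delete (23:50:04 to 23:52:04) is the suite waiting out that failure. The log does not show the exact expression used, so the value in this sketch is illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backtickPod builds a pod whose volumeMount uses a backtick expression in
// subPathExpr; all names and the expression itself are placeholders.
func backtickPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox",
				Command: []string{"sh", "-c", "true"},
				Env:     []corev1.EnvVar{{Name: "POD_NAME", Value: "x"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "work",
					MountPath:   "/vol",
					SubPathExpr: "`hostname`", // backticks are not a supported expansion
				}},
			}},
			Volumes: []corev1.Volume{{Name: "work", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}}},
		},
	}
}

func main() { _ = backtickPod() }
------------------------------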
• [SLOW TEST:124.170 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":90,"skipped":1376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:52:08.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-ef9faaf6-04dd-4c9f-b303-8e540a666ee1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ef9faaf6-04dd-4c9f-b303-8e540a666ee1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:52:14.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5897" for this suite. 
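------------------------------
The "updates should be reflected in volume" spec relies on the kubelet periodically re-syncing mounted (non-subPath) configMap volumes, which is what the "waiting to observe update in volume" step polls for. The update side is a plain Update call; names below match this run, while the data key/value are illustrative.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "configmap-5897", "configmap-test-upd-ef9faaf6-04dd-4c9f-b303-8e540a666ee1"
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Already-mounted volumes pick this up after the kubelet's next sync,
	// without restarting the consuming pod.
	cm.Data = map[string]string{"data-1": "value-2"} // illustrative key/value
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------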
• [SLOW TEST:6.189 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1494,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:52:14.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-qzpp STEP: Creating a pod to test atomic-volume-subpath Aug 10 23:52:14.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qzpp" in namespace "subpath-8851" to be "Succeeded or Failed" Aug 10 23:52:14.732: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Pending", Reason="", readiness=false. Elapsed: 35.234764ms Aug 10 23:52:16.736: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039499115s Aug 10 23:52:18.740: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 4.043088092s Aug 10 23:52:20.743: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 6.046638886s Aug 10 23:52:22.747: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 8.050080653s Aug 10 23:52:24.750: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 10.053381884s Aug 10 23:52:26.754: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 12.057144081s Aug 10 23:52:28.758: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 14.061034495s Aug 10 23:52:30.762: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 16.065291465s Aug 10 23:52:32.766: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 18.069552231s Aug 10 23:52:34.771: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 20.074040255s Aug 10 23:52:36.775: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Running", Reason="", readiness=true. Elapsed: 22.078622621s Aug 10 23:52:38.779: INFO: Pod "pod-subpath-test-configmap-qzpp": Phase="Succeeded", Reason="", readiness=false. 
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:52:38.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 10 23:52:39.159: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 10 23:52:44.163: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:52:45.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6785" for this suite.
• [SLOW TEST:6.359 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":93,"skipped":1529,"failed":0}
S
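"Releasing" a pod means the ReplicationController drops its ownerReference once the pod's labels stop matching the RC's selector; the orphaned pod keeps running and the RC spins up a replacement. One way to reproduce the label flip with client-go (a sketch assuming a v0.19-era client-go API; the namespace, pod name, and label value are hypothetical stand-ins for the test's generated fixtures):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the label the RC selects on; the RC manager then removes its
	// controllerRef from the pod ("releases" it) and creates a replacement.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	_, err = client.CoreV1().Pods("default").Patch(
		context.TODO(), "pod-release-abc12", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```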
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:52:45.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-d0d31370-eaa0-4d39-958e-3af60b8a6977
STEP: Creating a pod to test consume configMaps
Aug 10 23:52:45.459: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47" in namespace "projected-2071" to be "Succeeded or Failed"
Aug 10 23:52:45.469: INFO: Pod "pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168213ms
Aug 10 23:52:47.473: INFO: Pod "pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01429173s
Aug 10 23:52:49.478: INFO: Pod "pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018898827s
STEP: Saw pod success
Aug 10 23:52:49.478: INFO: Pod "pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47" satisfied condition "Succeeded or Failed"
Aug 10 23:52:49.481: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47 container projected-configmap-volume-test:
STEP: delete the pod
Aug 10 23:52:49.517: INFO: Waiting for pod pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47 to disappear
Aug 10 23:52:49.524: INFO: Pod pod-projected-configmaps-b584b369-bd9e-40ed-8eec-161071905a47 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:52:49.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2071" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1530,"failed":0}
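The "projected" flavour of the ConfigMap test consumes the same ConfigMap through more than one volume in a single pod, each volume using a projected source. A sketch of two projected volumes referencing one ConfigMap (names are hypothetical; the e2e test mounts them at different paths in one container and reads each copy):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A single projection source, reused by both volumes below.
	src := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
		},
	}
	// Two independent volumes projecting the same ConfigMap; the pod mounts
	// both, so identical data is visible at two mount points.
	vols := []corev1.Volume{
		{Name: "projected-configmap-volume-1", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{src}}}},
		{Name: "projected-configmap-volume-2", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{src}}}},
	}
	b, _ := yaml.Marshal(vols)
	fmt.Println(string(b))
}
```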
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1530,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:52:49.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-ec8f6dba-c97d-4a0a-aa41-22e5f837653c STEP: Creating a pod to test consume secrets Aug 10 23:52:49.658: INFO: Waiting up to 5m0s for pod "pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62" in namespace "secrets-4589" to be "Succeeded or Failed" Aug 10 23:52:49.741: INFO: Pod "pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62": Phase="Pending", Reason="", readiness=false. Elapsed: 83.033266ms Aug 10 23:52:51.744: INFO: Pod "pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086409379s Aug 10 23:52:53.747: INFO: Pod "pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62": Phase="Running", Reason="", readiness=true. Elapsed: 4.08955782s Aug 10 23:52:55.799: INFO: Pod "pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141170941s STEP: Saw pod success Aug 10 23:52:55.799: INFO: Pod "pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62" satisfied condition "Succeeded or Failed" Aug 10 23:52:55.802: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62 container secret-volume-test: STEP: delete the pod Aug 10 23:52:55.823: INFO: Waiting for pod pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62 to disappear Aug 10 23:52:55.843: INFO: Pod pod-secrets-865cad7c-48ec-49ea-9e7b-7b07aaf1de62 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:52:55.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4589" for this suite. • [SLOW TEST:6.319 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1530,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:52:55.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:53:09.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7544" for this suite.
• [SLOW TEST:13.192 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":96,"skipped":1546,"failed":0}
SSSSSS
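The quota spec walks the full lifecycle: usage is charged when a fitting pod is admitted, pods that would exceed the remaining hard limits are rejected at admission, resource requirements cannot be mutated in a way that changes usage, and usage is released on deletion. A sketch of a ResourceQuota of the general shape such a test creates (the name and limits are illustrative; pods only count against cpu/memory quota if they declare requests):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			// Hard caps per namespace; the apiserver rejects any pod whose
			// requests would push aggregate usage past these values.
			Hard: corev1.ResourceList{
				corev1.ResourcePods:   resource.MustParse("2"),
				corev1.ResourceCPU:    resource.MustParse("1"),
				corev1.ResourceMemory: resource.MustParse("500Mi"),
			},
		},
	}
	b, _ := yaml.Marshal(quota)
	fmt.Println(string(b))
}
```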
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:53:09.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Aug 10 23:53:09.141: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:53:15.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8147" for this suite.
• [SLOW TEST:6.807 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":97,"skipped":1552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
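With restartPolicy Never, a failing init container is terminal: the kubelet marks the pod Failed and never starts the app containers, which is exactly what this spec asserts. A minimal sketch of such a pod (names, image, and commands are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			// Never means a failed init container is not retried: the pod
			// transitions to Failed and "run" below is never started.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"},
			}},
			Containers: []corev1.Container{{
				Name:    "run",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo never reached"},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}
```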
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:53:15.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 10 23:53:23.095: INFO: 10 pods remaining
Aug 10 23:53:23.095: INFO: 10 pods has nil DeletionTimestamp
Aug 10 23:53:23.095: INFO:
Aug 10 23:53:24.838: INFO: 0 pods remaining
Aug 10 23:53:24.838: INFO: 0 pods has nil DeletionTimestamp
Aug 10 23:53:24.838: INFO:
Aug 10 23:53:25.955: INFO: 0 pods remaining
Aug 10 23:53:25.955: INFO: 0 pods has nil DeletionTimestamp
Aug 10 23:53:25.955: INFO:
STEP: Gathering metrics
W0810 23:53:26.935773 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Aug 10 23:54:28.952: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:54:28.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3667" for this suite.
• [SLOW TEST:73.115 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":98,"skipped":1578,"failed":0}
SSSSSSS
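The deleteOptions in question is foreground propagation: the RC is given a deletionTimestamp and a foregroundDeletion finalizer, and it only disappears after the garbage collector has deleted every dependent pod, which is why the log shows the pod count draining before the RC goes away. A sketch of issuing such a delete with client-go (assuming a v0.19-era API; the namespace and RC name are hypothetical):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC keeps a deletionTimestamp and a
	// foregroundDeletion finalizer until the garbage collector has removed
	// every dependent pod, which is exactly what this spec asserts.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
```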
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:54:28.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Aug 10 23:54:33.565: INFO: Successfully updated pod "labelsupdatedc2b80ae-495d-4f8a-aa36-64b6c3d48d31"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:54:37.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3723" for this suite.
• [SLOW TEST:8.649 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1585,"failed":0}
SSSSSSSSSSS
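A downward API volume exposes pod metadata as files, and the kubelet rewrites those files when the metadata changes; the "Successfully updated pod" line above is the label edit whose effect the test then reads back from the volume. A hedged sketch of a pod whose labels are projected to /etc/podinfo/labels (names and image are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						// The kubelet rewrites this file when the pod's labels
						// change, so edits show up without a restart.
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}
```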
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1596,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:54:41.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 10 23:54:41.853: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 10 23:54:41.869: INFO: Waiting for terminating namespaces to be deleted... Aug 10 23:54:41.896: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 10 23:54:41.903: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.903: INFO: Container coredns ready: true, restart count 0 Aug 10 23:54:41.903: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.903: INFO: Container coredns ready: true, restart count 0 Aug 10 23:54:41.903: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.903: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 23:54:41.903: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.903: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 23:54:41.903: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.903: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 10 23:54:41.903: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 10 23:54:41.908: INFO: labelsupdatedc2b80ae-495d-4f8a-aa36-64b6c3d48d31 from downward-api-3723 started at 2020-08-10 23:54:29 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.908: INFO: Container client-container ready: true, restart count 0 Aug 10 23:54:41.908: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.908: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 23:54:41.908: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.908: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 23:54:41.908: INFO: busybox-privileged-false-55fe6814-db0e-46bc-8848-39e215c3c0b3 from security-context-test-6763 started at 2020-08-10 23:54:37 +0000 UTC (1 container statuses recorded) Aug 10 23:54:41.908: INFO: Container busybox-privileged-false-55fe6814-db0e-46bc-8848-39e215c3c0b3 ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:54:41.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Aug 10 23:54:41.853: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 10 23:54:41.869: INFO: Waiting for terminating namespaces to be deleted...
Aug 10 23:54:41.896: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Aug 10 23:54:41.903: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.903: INFO: Container coredns ready: true, restart count 0
Aug 10 23:54:41.903: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.903: INFO: Container coredns ready: true, restart count 0
Aug 10 23:54:41.903: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.903: INFO: Container kindnet-cni ready: true, restart count 0
Aug 10 23:54:41.903: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.903: INFO: Container kube-proxy ready: true, restart count 0
Aug 10 23:54:41.903: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.903: INFO: Container local-path-provisioner ready: true, restart count 0
Aug 10 23:54:41.903: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Aug 10 23:54:41.908: INFO: labelsupdatedc2b80ae-495d-4f8a-aa36-64b6c3d48d31 from downward-api-3723 started at 2020-08-10 23:54:29 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.908: INFO: Container client-container ready: true, restart count 0
Aug 10 23:54:41.908: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.908: INFO: Container kindnet-cni ready: true, restart count 0
Aug 10 23:54:41.908: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.908: INFO: Container kube-proxy ready: true, restart count 0
Aug 10 23:54:41.908: INFO: busybox-privileged-false-55fe6814-db0e-46bc-8848-39e215c3c0b3 from security-context-test-6763 started at 2020-08-10 23:54:37 +0000 UTC (1 container statuses recorded)
Aug 10 23:54:41.908: INFO: Container busybox-privileged-false-55fe6814-db0e-46bc-8848-39e215c3c0b3 ready: false, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-fa55d8f6-a08d-471e-a6e4-04c4676eb838 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-fa55d8f6-a08d-471e-a6e4-04c4676eb838 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-fa55d8f6-a08d-471e-a6e4-04c4676eb838
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:54:50.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8040" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.303 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":101,"skipped":1600,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
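The predicate test applies the random label kubernetes.io/e2e-fa55d8f6-a08d-471e-a6e4-04c4676eb838=42 to a chosen node and then relaunches the pod with a matching nodeSelector, asserting it lands on that node. A sketch of the relaunched pod's distinguishing field (the label key and value come from the log above; the pod name and image are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// Only nodes carrying this exact label/value pair are feasible
			// for the scheduler, so the pod must land on the labelled node.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-fa55d8f6-a08d-471e-a6e4-04c4676eb838": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.2", // any always-running image works here
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}
```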
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:54:50.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 10 23:54:50.974: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 10 23:54:52.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700491, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700491, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700491, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732700490, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 10 23:54:56.042: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 10 23:54:56.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:54:57.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7033" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.298 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":102,"skipped":1644,"failed":0}
SSSS
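Conversion between CR versions is driven by the CRD's conversion stanza, which points the apiserver at an in-cluster webhook service; the test deploys that service (sample-crd-conversion-webhook-deployment, paired with the e2e-test-crd-conversion-webhook service above) and then reads a v1 object back as v2. A sketch of the stanza using the apiextensions Go types; the path, port, and CA bundle are assumptions, not values taken from this run:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"sigs.k8s.io/yaml"
)

func strPtr(s string) *string { return &s }
func int32Ptr(i int32) *int32 { return &i }

func main() {
	conv := apiextensionsv1.CustomResourceConversion{
		// "Webhook" tells the apiserver to call out for every cross-version
		// read or write instead of doing a no-op ("None") conversion.
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-7033",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      strPtr("/crdconvert"), // assumed path
					Port:      int32Ptr(9443),        // assumed port
				},
				CABundle: []byte("<PEM CA bundle>"), // placeholder
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	b, _ := yaml.Marshal(conv)
	fmt.Println(string(b))
}
```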
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 23:54:57.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 10 23:55:01.550: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 23:55:01.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2726" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1648,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
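FallbackToLogsOnError only substitutes the tail of the container log when the termination-message file is empty and the container failed; here the container succeeds and writes the file, so the status message must come from the file itself (the "OK" matched above). A sketch of such a container (names and command are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "term",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				// The policy only consults the log when the file is empty AND
				// the container failed; this one writes the file and exits 0,
				// so the reported message is read from the file.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}
```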
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":104,"skipped":1670,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:55:05.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3552 STEP: creating service affinity-nodeport in namespace services-3552 STEP: creating replication controller affinity-nodeport in namespace services-3552 I0810 23:55:06.043120 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3552, replica count: 3 I0810 23:55:09.093531 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 23:55:12.093714 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 23:55:12.104: INFO: Creating new exec pod Aug 10 23:55:17.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3552 execpod-affinityhj7rp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Aug 10 23:55:17.389: INFO: stderr: "I0810 23:55:17.269900 1773 log.go:181] (0xc00003a4d0) (0xc00072ab40) Create stream\nI0810 23:55:17.269984 1773 log.go:181] (0xc00003a4d0) (0xc00072ab40) Stream added, broadcasting: 1\nI0810 23:55:17.272268 1773 log.go:181] (0xc00003a4d0) Reply frame received for 1\nI0810 23:55:17.272336 1773 log.go:181] (0xc00003a4d0) (0xc00072b540) Create stream\nI0810 23:55:17.272372 1773 log.go:181] (0xc00003a4d0) (0xc00072b540) Stream added, broadcasting: 3\nI0810 23:55:17.273545 1773 log.go:181] (0xc00003a4d0) Reply frame received for 3\nI0810 23:55:17.273594 1773 log.go:181] (0xc00003a4d0) (0xc0002eea00) Create stream\nI0810 23:55:17.273607 1773 log.go:181] (0xc00003a4d0) (0xc0002eea00) Stream added, broadcasting: 5\nI0810 23:55:17.274826 1773 log.go:181] (0xc00003a4d0) Reply frame received for 5\nI0810 23:55:17.380224 1773 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0810 23:55:17.380260 1773 log.go:181] (0xc0002eea00) (5) Data frame handling\nI0810 23:55:17.380279 1773 log.go:181] (0xc0002eea00) (5) Data frame sent\nI0810 23:55:17.380295 1773 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0810 23:55:17.380304 1773 log.go:181] (0xc0002eea00) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0810 23:55:17.380345 1773 log.go:181] (0xc0002eea00) (5) Data frame sent\nI0810 23:55:17.380357 1773 
log.go:181] (0xc00003a4d0) Data frame received for 3\nI0810 23:55:17.380363 1773 log.go:181] (0xc00072b540) (3) Data frame handling\nI0810 23:55:17.380571 1773 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0810 23:55:17.380589 1773 log.go:181] (0xc0002eea00) (5) Data frame handling\nI0810 23:55:17.382336 1773 log.go:181] (0xc00003a4d0) Data frame received for 1\nI0810 23:55:17.382368 1773 log.go:181] (0xc00072ab40) (1) Data frame handling\nI0810 23:55:17.382397 1773 log.go:181] (0xc00072ab40) (1) Data frame sent\nI0810 23:55:17.382420 1773 log.go:181] (0xc00003a4d0) (0xc00072ab40) Stream removed, broadcasting: 1\nI0810 23:55:17.382442 1773 log.go:181] (0xc00003a4d0) Go away received\nI0810 23:55:17.382766 1773 log.go:181] (0xc00003a4d0) (0xc00072ab40) Stream removed, broadcasting: 1\nI0810 23:55:17.382793 1773 log.go:181] (0xc00003a4d0) (0xc00072b540) Stream removed, broadcasting: 3\nI0810 23:55:17.382803 1773 log.go:181] (0xc00003a4d0) (0xc0002eea00) Stream removed, broadcasting: 5\n" Aug 10 23:55:17.389: INFO: stdout: "" Aug 10 23:55:17.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3552 execpod-affinityhj7rp -- /bin/sh -x -c nc -zv -t -w 2 10.108.64.87 80' Aug 10 23:55:17.596: INFO: stderr: "I0810 23:55:17.515291 1787 log.go:181] (0xc00053d290) (0xc000d046e0) Create stream\nI0810 23:55:17.515334 1787 log.go:181] (0xc00053d290) (0xc000d046e0) Stream added, broadcasting: 1\nI0810 23:55:17.519112 1787 log.go:181] (0xc00053d290) Reply frame received for 1\nI0810 23:55:17.519151 1787 log.go:181] (0xc00053d290) (0xc000137c20) Create stream\nI0810 23:55:17.519187 1787 log.go:181] (0xc00053d290) (0xc000137c20) Stream added, broadcasting: 3\nI0810 23:55:17.520087 1787 log.go:181] (0xc00053d290) Reply frame received for 3\nI0810 23:55:17.520121 1787 log.go:181] (0xc00053d290) (0xc000a440a0) Create stream\nI0810 23:55:17.520133 1787 log.go:181] (0xc00053d290) (0xc000a440a0) Stream added, broadcasting: 5\nI0810 23:55:17.521097 1787 log.go:181] (0xc00053d290) Reply frame received for 5\nI0810 23:55:17.588353 1787 log.go:181] (0xc00053d290) Data frame received for 5\nI0810 23:55:17.588398 1787 log.go:181] (0xc000a440a0) (5) Data frame handling\nI0810 23:55:17.588424 1787 log.go:181] (0xc000a440a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.64.87 80\nConnection to 10.108.64.87 80 port [tcp/http] succeeded!\nI0810 23:55:17.588464 1787 log.go:181] (0xc00053d290) Data frame received for 3\nI0810 23:55:17.588474 1787 log.go:181] (0xc000137c20) (3) Data frame handling\nI0810 23:55:17.588556 1787 log.go:181] (0xc00053d290) Data frame received for 5\nI0810 23:55:17.588571 1787 log.go:181] (0xc000a440a0) (5) Data frame handling\nI0810 23:55:17.589988 1787 log.go:181] (0xc00053d290) Data frame received for 1\nI0810 23:55:17.590017 1787 log.go:181] (0xc000d046e0) (1) Data frame handling\nI0810 23:55:17.590037 1787 log.go:181] (0xc000d046e0) (1) Data frame sent\nI0810 23:55:17.590059 1787 log.go:181] (0xc00053d290) (0xc000d046e0) Stream removed, broadcasting: 1\nI0810 23:55:17.590084 1787 log.go:181] (0xc00053d290) Go away received\nI0810 23:55:17.590510 1787 log.go:181] (0xc00053d290) (0xc000d046e0) Stream removed, broadcasting: 1\nI0810 23:55:17.590532 1787 log.go:181] (0xc00053d290) (0xc000137c20) Stream removed, broadcasting: 3\nI0810 23:55:17.590542 1787 log.go:181] (0xc00053d290) (0xc000a440a0) Stream removed, broadcasting: 5\n" Aug 10 23:55:17.596: INFO: stdout: "" Aug 10 23:55:17.596: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3552 execpod-affinityhj7rp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30498' Aug 10 23:55:17.817: INFO: stderr: "I0810 23:55:17.739156 1805 log.go:181] (0xc00003a160) (0xc000d0b900) Create stream\nI0810 23:55:17.739207 1805 log.go:181] (0xc00003a160) (0xc000d0b900) Stream added, broadcasting: 1\nI0810 23:55:17.740713 1805 log.go:181] (0xc00003a160) Reply frame received for 1\nI0810 23:55:17.740843 1805 log.go:181] (0xc00003a160) (0xc000d03040) Create stream\nI0810 23:55:17.740855 1805 log.go:181] (0xc00003a160) (0xc000d03040) Stream added, broadcasting: 3\nI0810 23:55:17.741652 1805 log.go:181] (0xc00003a160) Reply frame received for 3\nI0810 23:55:17.741674 1805 log.go:181] (0xc00003a160) (0xc000b80820) Create stream\nI0810 23:55:17.741681 1805 log.go:181] (0xc00003a160) (0xc000b80820) Stream added, broadcasting: 5\nI0810 23:55:17.742389 1805 log.go:181] (0xc00003a160) Reply frame received for 5\nI0810 23:55:17.809413 1805 log.go:181] (0xc00003a160) Data frame received for 5\nI0810 23:55:17.809463 1805 log.go:181] (0xc000b80820) (5) Data frame handling\nI0810 23:55:17.809487 1805 log.go:181] (0xc000b80820) (5) Data frame sent\nI0810 23:55:17.809506 1805 log.go:181] (0xc00003a160) Data frame received for 5\nI0810 23:55:17.809524 1805 log.go:181] (0xc000b80820) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30498\nConnection to 172.18.0.14 30498 port [tcp/30498] succeeded!\nI0810 23:55:17.809562 1805 log.go:181] (0xc00003a160) Data frame received for 3\nI0810 23:55:17.809583 1805 log.go:181] (0xc000d03040) (3) Data frame handling\nI0810 23:55:17.811141 1805 log.go:181] (0xc00003a160) Data frame received for 1\nI0810 23:55:17.811164 1805 log.go:181] (0xc000d0b900) (1) Data frame handling\nI0810 23:55:17.811183 1805 log.go:181] (0xc000d0b900) (1) Data frame sent\nI0810 23:55:17.811201 1805 log.go:181] (0xc00003a160) (0xc000d0b900) Stream removed, broadcasting: 1\nI0810 23:55:17.811235 1805 log.go:181] (0xc00003a160) Go away received\nI0810 23:55:17.811615 1805 log.go:181] (0xc00003a160) (0xc000d0b900) Stream removed, broadcasting: 1\nI0810 23:55:17.811637 1805 log.go:181] (0xc00003a160) (0xc000d03040) Stream removed, broadcasting: 3\nI0810 23:55:17.811648 1805 log.go:181] (0xc00003a160) (0xc000b80820) Stream removed, broadcasting: 5\n" Aug 10 23:55:17.817: INFO: stdout: "" Aug 10 23:55:17.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3552 execpod-affinityhj7rp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30498' Aug 10 23:55:18.047: INFO: stderr: "I0810 23:55:17.948810 1823 log.go:181] (0xc000e8ebb0) (0xc0004e3f40) Create stream\nI0810 23:55:17.948862 1823 log.go:181] (0xc000e8ebb0) (0xc0004e3f40) Stream added, broadcasting: 1\nI0810 23:55:17.951459 1823 log.go:181] (0xc000e8ebb0) Reply frame received for 1\nI0810 23:55:17.951518 1823 log.go:181] (0xc000e8ebb0) (0xc0009a0dc0) Create stream\nI0810 23:55:17.951543 1823 log.go:181] (0xc000e8ebb0) (0xc0009a0dc0) Stream added, broadcasting: 3\nI0810 23:55:17.952360 1823 log.go:181] (0xc000e8ebb0) Reply frame received for 3\nI0810 23:55:17.952382 1823 log.go:181] (0xc000e8ebb0) (0xc00047e640) Create stream\nI0810 23:55:17.952390 1823 log.go:181] (0xc000e8ebb0) (0xc00047e640) Stream added, broadcasting: 5\nI0810 23:55:17.953312 1823 log.go:181] (0xc000e8ebb0) Reply frame received for 5\nI0810 23:55:18.035750 1823 log.go:181] 
(0xc000e8ebb0) Data frame received for 3\nI0810 23:55:18.035802 1823 log.go:181] (0xc0009a0dc0) (3) Data frame handling\nI0810 23:55:18.040978 1823 log.go:181] (0xc000e8ebb0) Data frame received for 5\nI0810 23:55:18.041004 1823 log.go:181] (0xc00047e640) (5) Data frame handling\nI0810 23:55:18.041021 1823 log.go:181] (0xc00047e640) (5) Data frame sent\nI0810 23:55:18.041028 1823 log.go:181] (0xc000e8ebb0) Data frame received for 5\nI0810 23:55:18.041033 1823 log.go:181] (0xc00047e640) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30498\nConnection to 172.18.0.12 30498 port [tcp/30498] succeeded!\nI0810 23:55:18.042643 1823 log.go:181] (0xc000e8ebb0) Data frame received for 1\nI0810 23:55:18.042660 1823 log.go:181] (0xc0004e3f40) (1) Data frame handling\nI0810 23:55:18.042675 1823 log.go:181] (0xc0004e3f40) (1) Data frame sent\nI0810 23:55:18.042693 1823 log.go:181] (0xc000e8ebb0) (0xc0004e3f40) Stream removed, broadcasting: 1\nI0810 23:55:18.042707 1823 log.go:181] (0xc000e8ebb0) Go away received\nI0810 23:55:18.043122 1823 log.go:181] (0xc000e8ebb0) (0xc0004e3f40) Stream removed, broadcasting: 1\nI0810 23:55:18.043143 1823 log.go:181] (0xc000e8ebb0) (0xc0009a0dc0) Stream removed, broadcasting: 3\nI0810 23:55:18.043152 1823 log.go:181] (0xc000e8ebb0) (0xc00047e640) Stream removed, broadcasting: 5\n" Aug 10 23:55:18.047: INFO: stdout: "" Aug 10 23:55:18.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-3552 execpod-affinityhj7rp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30498/ ; done' Aug 10 23:55:18.351: INFO: stderr: "I0810 23:55:18.181368 1841 log.go:181] (0xc0009d4fd0) (0xc000795900) Create stream\nI0810 23:55:18.181448 1841 log.go:181] (0xc0009d4fd0) (0xc000795900) Stream added, broadcasting: 1\nI0810 23:55:18.188250 1841 log.go:181] (0xc0009d4fd0) Reply frame received for 1\nI0810 23:55:18.188285 1841 log.go:181] (0xc0009d4fd0) (0xc0004f90e0) Create stream\nI0810 23:55:18.188292 1841 log.go:181] (0xc0009d4fd0) (0xc0004f90e0) Stream added, broadcasting: 3\nI0810 23:55:18.190475 1841 log.go:181] (0xc0009d4fd0) Reply frame received for 3\nI0810 23:55:18.190515 1841 log.go:181] (0xc0009d4fd0) (0xc00044a0a0) Create stream\nI0810 23:55:18.190524 1841 log.go:181] (0xc0009d4fd0) (0xc00044a0a0) Stream added, broadcasting: 5\nI0810 23:55:18.191210 1841 log.go:181] (0xc0009d4fd0) Reply frame received for 5\nI0810 23:55:18.250288 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.250333 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.250345 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.250375 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.250389 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.250398 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.250806 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.250831 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.250855 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.251127 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.251147 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.251155 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 
2 http://172.18.0.14:30498/\nI0810 23:55:18.251208 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.251218 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.251225 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.256047 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.256071 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.256093 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.256510 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.256525 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.256539 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.256556 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.256570 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.256581 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.262672 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.262702 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.262721 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.263341 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.263367 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.263375 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.263419 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.263444 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.263460 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.269834 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.269864 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.269886 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.270480 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.270501 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.270511 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.270530 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.270553 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.270565 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.276426 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.276449 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.276464 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.277265 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.277299 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.277314 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.277339 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.277352 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.277365 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\nI0810 23:55:18.277376 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.277388 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.277413 1841 log.go:181] 
(0xc00044a0a0) (5) Data frame sent\nI0810 23:55:18.283717 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.283746 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.283764 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.284419 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.284435 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.284441 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.284456 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.284465 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.284472 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.289095 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.289122 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.289150 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.289842 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.289873 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.289890 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.289909 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.289925 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.289939 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.295210 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.295228 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.295250 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.295783 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.295818 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.295848 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.295874 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.295890 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.295903 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.300111 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.300127 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.300150 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.301143 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.301160 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.301180 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.301189 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.301195 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.301204 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.305280 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.305357 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.305423 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.305903 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.305928 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.305944 1841 
log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.305960 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.305972 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.305989 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.310848 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.310869 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.310886 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.311510 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.311542 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.311558 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.311609 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.311659 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.311685 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.318384 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.318404 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.318422 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.319342 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.319363 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.319373 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.319402 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.319425 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.319442 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.324847 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.324873 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.324883 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.325473 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.325500 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.325535 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.325628 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.325656 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.325679 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.331114 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.331131 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.331141 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.331703 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.331734 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.331748 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.331768 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.331777 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.331785 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.340089 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.340131 1841 
log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.340149 1841 log.go:181] (0xc00044a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30498/\nI0810 23:55:18.340173 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.340189 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.340215 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.340231 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.340246 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.340280 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.344935 1841 log.go:181] (0xc0009d4fd0) Data frame received for 5\nI0810 23:55:18.344975 1841 log.go:181] (0xc00044a0a0) (5) Data frame handling\nI0810 23:55:18.345013 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.345036 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.345065 1841 log.go:181] (0xc0004f90e0) (3) Data frame sent\nI0810 23:55:18.345104 1841 log.go:181] (0xc0009d4fd0) Data frame received for 3\nI0810 23:55:18.345121 1841 log.go:181] (0xc0004f90e0) (3) Data frame handling\nI0810 23:55:18.346770 1841 log.go:181] (0xc0009d4fd0) Data frame received for 1\nI0810 23:55:18.346782 1841 log.go:181] (0xc000795900) (1) Data frame handling\nI0810 23:55:18.346793 1841 log.go:181] (0xc000795900) (1) Data frame sent\nI0810 23:55:18.346802 1841 log.go:181] (0xc0009d4fd0) (0xc000795900) Stream removed, broadcasting: 1\nI0810 23:55:18.346814 1841 log.go:181] (0xc0009d4fd0) Go away received\nI0810 23:55:18.347249 1841 log.go:181] (0xc0009d4fd0) (0xc000795900) Stream removed, broadcasting: 1\nI0810 23:55:18.347270 1841 log.go:181] (0xc0009d4fd0) (0xc0004f90e0) Stream removed, broadcasting: 3\nI0810 23:55:18.347279 1841 log.go:181] (0xc0009d4fd0) (0xc00044a0a0) Stream removed, broadcasting: 5\n" Aug 10 23:55:18.352: INFO: stdout: "\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf\naffinity-nodeport-jnfhf" Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: 
affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Received response from host: affinity-nodeport-jnfhf Aug 10 23:55:18.352: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-3552, will wait for the garbage collector to delete the pods Aug 10 23:55:18.744: INFO: Deleting ReplicationController affinity-nodeport took: 226.244512ms Aug 10 23:55:19.244: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.217817ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:55:33.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3552" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:28.106 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":105,"skipped":1691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:55:33.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:55:38.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-627" for this suite. 
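For reference, the hostAliases spec above relies on the kubelet injecting extra entries into the container's /etc/hosts from spec.hostAliases. A minimal sketch of such a pod using the k8s.io/api Go types (the pod name, image, command, and addresses below are illustrative assumptions, not the suite's exact fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose kubelet-managed /etc/hosts should contain the extra entries.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{
				// Illustrative IP and hostnames; the log does not show the fixture's values.
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/hosts"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Spec.HostAliases)
}

The spec passes once the kubelet-written /etc/hosts in the container contains the configured hostnames.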
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:55:38.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Aug 10 23:55:38.129: INFO: Waiting up to 5m0s for pod "var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f" in namespace "var-expansion-410" to be "Succeeded or Failed" Aug 10 23:55:38.132: INFO: Pod "var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.97569ms Aug 10 23:55:40.311: INFO: Pod "var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182146845s Aug 10 23:55:42.315: INFO: Pod "var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f": Phase="Running", Reason="", readiness=true. Elapsed: 4.185719966s Aug 10 23:55:44.319: INFO: Pod "var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190330836s STEP: Saw pod success Aug 10 23:55:44.319: INFO: Pod "var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f" satisfied condition "Succeeded or Failed" Aug 10 23:55:44.323: INFO: Trying to get logs from node latest-worker2 pod var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f container dapi-container: STEP: delete the pod Aug 10 23:55:44.343: INFO: Waiting for pod var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f to disappear Aug 10 23:55:44.406: INFO: Pod var-expansion-760334f4-21d6-44ee-96fd-0c6e0d9f365f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:55:44.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-410" for this suite. 
• [SLOW TEST:6.427 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":107,"skipped":1781,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:55:44.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-2c8ce459-2ea3-4ba8-8f38-65ccc30a865e STEP: Creating a pod to test consume secrets Aug 10 23:55:44.605: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e" in namespace "projected-513" to be "Succeeded or Failed" Aug 10 23:55:44.622: INFO: Pod "pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.808313ms Aug 10 23:55:46.626: INFO: Pod "pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021538991s Aug 10 23:55:48.631: INFO: Pod "pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026151259s STEP: Saw pod success Aug 10 23:55:48.631: INFO: Pod "pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e" satisfied condition "Succeeded or Failed" Aug 10 23:55:48.633: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e container projected-secret-volume-test: STEP: delete the pod Aug 10 23:55:48.875: INFO: Waiting for pod pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e to disappear Aug 10 23:55:48.903: INFO: Pod pod-projected-secrets-c76b9fc5-0d2b-4fc6-a536-19c1f5c14f8e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:55:48.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-513" for this suite. 
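The Projected secret spec above mounts a secret through a projected volume, remapping a key to a new path with an explicit per-item file mode — the "Item Mode" the test verifies on the mounted file. A sketch of the volume definition (the secret name, key, path, and the 0400 mode are assumptions for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // the per-item mode the spec checks on the mounted file
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						// Map the secret key to a new path with an explicit file mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}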
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1792,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:55:48.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 10 23:55:49.020: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:55:56.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8534" for this suite. • [SLOW TEST:7.528 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":109,"skipped":1799,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:55:56.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 10 23:55:56.558: INFO: Waiting up to 1m0s for all nodes to be ready Aug 10 23:56:56.577: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 10 23:56:56.608: INFO: Created pod: pod0-sched-preemption-low-priority Aug 10 23:56:56.696: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:57:28.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8097" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:92.426 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":110,"skipped":1817,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:57:28.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 10 23:57:33.560: INFO: Successfully updated pod "annotationupdate56af43ae-0d00-4463-8ffa-787a5814e516" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:57:35.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9232" for this suite. 
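The Projected downwardAPI spec above mounts pod metadata as a file and then patches the pod's annotations; "Successfully updated pod" marks the annotation change that the kubelet must propagate into the mounted file. A sketch of the projected downward-API volume involved (the volume and file names are assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The kubelet rewrites the projected file when pod annotations change,
	// which is the update the spec waits to observe inside the container.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}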
• [SLOW TEST:6.893 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:57:35.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7564.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7564.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7564.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 23:57:43.957: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.960: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.962: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.965: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.973: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.977: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.980: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.987: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:43.994: INFO: Lookups using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local] Aug 10 23:57:48.999: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource 
(get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.003: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.006: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.009: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.019: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.022: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.027: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:49.051: INFO: Lookups using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local] Aug 10 23:57:53.998: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.002: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.004: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.031: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local from 
pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.040: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.043: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.046: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.049: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:54.053: INFO: Lookups using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local] Aug 10 23:57:58.999: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.003: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.021: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods 
dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.026: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:57:59.035: INFO: Lookups using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local] Aug 10 23:58:03.999: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.003: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.007: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.011: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.020: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.025: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.027: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:04.033: INFO: Lookups using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local] Aug 10 23:58:08.998: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.001: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.004: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.007: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.015: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.018: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.021: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.024: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local from pod dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27: the server could not find the requested resource (get pods dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27) Aug 10 23:58:09.030: INFO: Lookups using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7564.svc.cluster.local jessie_udp@dns-test-service-2.dns-7564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7564.svc.cluster.local] Aug 10 23:58:14.037: INFO: DNS probes using dns-7564/dns-test-4e65bfe2-e596-46a1-b4e4-55714a8d7e27 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:58:14.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7564" for this suite. • [SLOW TEST:38.954 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":112,"skipped":1866,"failed":0} SSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:58:14.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 23:58:14.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8446" for this suite. 
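The Events API spec above walks an events.k8s.io/v1 Event through its full lifecycle: create, list (cluster- and namespace-scoped), field-selector filtering on source and reportingController, get, patch, update, and delete. A sketch of such an event object (all names and values below are illustrative):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	eventsv1 "k8s.io/api/events/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ev := eventsv1.Event{
		ObjectMeta:          metav1.ObjectMeta{Name: "test-event", Namespace: "default"},
		EventTime:           metav1.NewMicroTime(time.Now()),
		ReportingController: "example.com/test-controller", // filterable via field selectors
		ReportingInstance:   "test-controller-1",
		Action:              "Testing",
		Reason:              "Created",
		Type:                corev1.EventTypeNormal,
		Regarding:           corev1.ObjectReference{Kind: "Pod", Namespace: "default", Name: "example"},
		Note:                "created for demonstration",
	}
	fmt.Println(ev.Name, ev.Reason)
}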
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":113,"skipped":1871,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 23:58:14.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 10 23:58:14.999: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 10 23:58:15.015: INFO: Waiting for terminating namespaces to be deleted... Aug 10 23:58:15.037: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 10 23:58:15.043: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.043: INFO: Container coredns ready: true, restart count 0 Aug 10 23:58:15.043: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.043: INFO: Container coredns ready: true, restart count 0 Aug 10 23:58:15.043: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.043: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 23:58:15.043: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.043: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 23:58:15.043: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.043: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 10 23:58:15.043: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 10 23:58:15.053: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.053: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 23:58:15.053: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 23:58:15.053: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-41ee8840-ad1d-4c76-ac76-ceeda345a545 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-41ee8840-ad1d-4c76-ac76-ceeda345a545 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-41ee8840-ad1d-4c76-ac76-ceeda345a545 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:03:23.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5054" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.746 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":114,"skipped":1886,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:03:23.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-a0d7dade-948f-461a-8340-43a42fe54292 STEP: Creating a pod to test consume configMaps Aug 11 00:03:23.760: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407" in namespace "projected-1441" to be "Succeeded or Failed" Aug 11 00:03:23.763: INFO: Pod "pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407": Phase="Pending", Reason="", readiness=false. Elapsed: 2.954738ms Aug 11 00:03:25.768: INFO: Pod "pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007466808s Aug 11 00:03:27.772: INFO: Pod "pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011550701s STEP: Saw pod success Aug 11 00:03:27.772: INFO: Pod "pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407" satisfied condition "Succeeded or Failed" Aug 11 00:03:27.775: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407 container projected-configmap-volume-test: STEP: delete the pod Aug 11 00:03:27.846: INFO: Waiting for pod pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407 to disappear Aug 11 00:03:27.854: INFO: Pod pod-projected-configmaps-074cc727-8e6a-47c2-80dc-1e37944a8407 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:03:27.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1441" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1888,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:03:27.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Aug 11 00:03:27.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config api-versions' Aug 11 00:03:28.156: INFO: stderr: "" Aug 11 00:03:28.156: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:03:28.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8489" for this suite. 
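The api-versions spec above shells out to kubectl and asserts that plain "v1" (the core group) appears in the output. The same check can be sketched against the discovery API with client-go (the kubeconfig path is taken from the log; error handling is kept minimal):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // the core group prints as plain "v1"
		}
	}
}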
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":116,"skipped":1897,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:03:28.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Aug 11 00:03:28.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config cluster-info' Aug 11 00:03:32.082: INFO: stderr: "" Aug 11 00:03:32.082: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42901\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42901/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:03:32.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7409" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":117,"skipped":1903,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:03:32.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-00679051-3411-4e18-ae2b-797831dc71b7 STEP: Creating secret with name s-test-opt-upd-7b587727-a95c-4de0-9f01-48a7eae031c8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-00679051-3411-4e18-ae2b-797831dc71b7 STEP: Updating secret s-test-opt-upd-7b587727-a95c-4de0-9f01-48a7eae031c8 STEP: Creating secret with name s-test-opt-create-e465bdf2-f2d0-4d1a-82d9-bda39027785c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:05:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4081" for this suite. • [SLOW TEST:96.795 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:05:08.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 11 00:05:08.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod 
--image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6287' Aug 11 00:05:09.132: INFO: stderr: "" Aug 11 00:05:09.132: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 11 00:05:14.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6287 -o json' Aug 11 00:05:14.380: INFO: stderr: "" Aug 11 00:05:14.380: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-11T00:05:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-11T00:05:09Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.55\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-11T00:05:12Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6287\",\n \"resourceVersion\": \"6048624\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6287/pods/e2e-test-httpd-pod\",\n \"uid\": \"eeb418a7-867e-4529-9ca7-35c44d6bf72f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-r79d4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n 
{\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-r79d4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-r79d4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-11T00:05:09Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-11T00:05:12Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-11T00:05:12Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-11T00:05:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f464d4203e87252870b344a5d40ba76e4a37e692a80a6bdc8f0d36675b5cd0fc\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-11T00:05:12Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.55\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.55\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-11T00:05:09Z\"\n }\n}\n" STEP: replace the image in the pod Aug 11 00:05:14.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6287' Aug 11 00:05:15.032: INFO: stderr: "" Aug 11 00:05:15.032: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Aug 11 00:05:15.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6287' Aug 11 00:05:23.834: INFO: stderr: "" Aug 11 00:05:23.834: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:05:23.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6287" for this suite. 
• [SLOW TEST:14.955 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":119,"skipped":1947,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:05:23.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9495 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 11 00:05:23.963: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 11 00:05:24.055: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:05:26.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:05:28.059: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:30.060: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:32.060: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:34.060: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:36.059: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:38.060: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:40.060: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:42.060: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:05:44.060: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 11 00:05:44.066: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 11 00:05:50.123: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.182 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9495 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:05:50.123: INFO: >>> kubeConfig: /root/.kube/config I0811 00:05:50.160192 7 log.go:181] (0xc00106a4d0) (0xc007147180) Create stream I0811 00:05:50.160222 7 log.go:181] (0xc00106a4d0) (0xc007147180) Stream added, broadcasting: 1 I0811 00:05:50.162430 7 log.go:181] (0xc00106a4d0) Reply frame received for 1 I0811 00:05:50.162508 7 log.go:181] (0xc00106a4d0) (0xc00292eaa0) Create stream I0811 00:05:50.162546 7 log.go:181] (0xc00106a4d0) (0xc00292eaa0) Stream added, broadcasting: 3 I0811 
00:05:50.163807 7 log.go:181] (0xc00106a4d0) Reply frame received for 3 I0811 00:05:50.163864 7 log.go:181] (0xc00106a4d0) (0xc0027aa000) Create stream I0811 00:05:50.163882 7 log.go:181] (0xc00106a4d0) (0xc0027aa000) Stream added, broadcasting: 5 I0811 00:05:50.165244 7 log.go:181] (0xc00106a4d0) Reply frame received for 5 I0811 00:05:51.263223 7 log.go:181] (0xc00106a4d0) Data frame received for 5 I0811 00:05:51.263273 7 log.go:181] (0xc0027aa000) (5) Data frame handling I0811 00:05:51.263310 7 log.go:181] (0xc00106a4d0) Data frame received for 3 I0811 00:05:51.263334 7 log.go:181] (0xc00292eaa0) (3) Data frame handling I0811 00:05:51.263364 7 log.go:181] (0xc00292eaa0) (3) Data frame sent I0811 00:05:51.263385 7 log.go:181] (0xc00106a4d0) Data frame received for 3 I0811 00:05:51.263401 7 log.go:181] (0xc00292eaa0) (3) Data frame handling I0811 00:05:51.265550 7 log.go:181] (0xc00106a4d0) Data frame received for 1 I0811 00:05:51.265575 7 log.go:181] (0xc007147180) (1) Data frame handling I0811 00:05:51.265590 7 log.go:181] (0xc007147180) (1) Data frame sent I0811 00:05:51.265602 7 log.go:181] (0xc00106a4d0) (0xc007147180) Stream removed, broadcasting: 1 I0811 00:05:51.265614 7 log.go:181] (0xc00106a4d0) Go away received I0811 00:05:51.265763 7 log.go:181] (0xc00106a4d0) (0xc007147180) Stream removed, broadcasting: 1 I0811 00:05:51.265803 7 log.go:181] (0xc00106a4d0) (0xc00292eaa0) Stream removed, broadcasting: 3 I0811 00:05:51.265836 7 log.go:181] (0xc00106a4d0) (0xc0027aa000) Stream removed, broadcasting: 5 Aug 11 00:05:51.265: INFO: Found all expected endpoints: [netserver-0] Aug 11 00:05:51.269: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.56 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9495 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:05:51.269: INFO: >>> kubeConfig: /root/.kube/config I0811 00:05:51.305384 7 log.go:181] (0xc00106aa50) (0xc007147680) Create stream I0811 00:05:51.305429 7 log.go:181] (0xc00106aa50) (0xc007147680) Stream added, broadcasting: 1 I0811 00:05:51.308456 7 log.go:181] (0xc00106aa50) Reply frame received for 1 I0811 00:05:51.308490 7 log.go:181] (0xc00106aa50) (0xc00292eb40) Create stream I0811 00:05:51.308519 7 log.go:181] (0xc00106aa50) (0xc00292eb40) Stream added, broadcasting: 3 I0811 00:05:51.309655 7 log.go:181] (0xc00106aa50) Reply frame received for 3 I0811 00:05:51.309682 7 log.go:181] (0xc00106aa50) (0xc007147720) Create stream I0811 00:05:51.309697 7 log.go:181] (0xc00106aa50) (0xc007147720) Stream added, broadcasting: 5 I0811 00:05:51.310771 7 log.go:181] (0xc00106aa50) Reply frame received for 5 I0811 00:05:52.410015 7 log.go:181] (0xc00106aa50) Data frame received for 3 I0811 00:05:52.410058 7 log.go:181] (0xc00292eb40) (3) Data frame handling I0811 00:05:52.410093 7 log.go:181] (0xc00292eb40) (3) Data frame sent I0811 00:05:52.411883 7 log.go:181] (0xc00106aa50) Data frame received for 3 I0811 00:05:52.411919 7 log.go:181] (0xc00106aa50) Data frame received for 5 I0811 00:05:52.411989 7 log.go:181] (0xc007147720) (5) Data frame handling I0811 00:05:52.412052 7 log.go:181] (0xc00292eb40) (3) Data frame handling I0811 00:05:52.413790 7 log.go:181] (0xc00106aa50) Data frame received for 1 I0811 00:05:52.413832 7 log.go:181] (0xc007147680) (1) Data frame handling I0811 00:05:52.413874 7 log.go:181] (0xc007147680) (1) Data frame sent I0811 00:05:52.413899 7 log.go:181] (0xc00106aa50) (0xc007147680) Stream removed, 
broadcasting: 1 I0811 00:05:52.413932 7 log.go:181] (0xc00106aa50) Go away received I0811 00:05:52.414050 7 log.go:181] (0xc00106aa50) (0xc007147680) Stream removed, broadcasting: 1 I0811 00:05:52.414091 7 log.go:181] (0xc00106aa50) (0xc00292eb40) Stream removed, broadcasting: 3 I0811 00:05:52.414111 7 log.go:181] (0xc00106aa50) (0xc007147720) Stream removed, broadcasting: 5 Aug 11 00:05:52.414: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:05:52.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9495" for this suite. • [SLOW TEST:28.580 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1947,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:05:52.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:05:52.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4891' Aug 11 00:05:52.808: INFO: stderr: "" Aug 11 00:05:52.808: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Aug 11 00:05:52.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4891' Aug 11 00:05:53.110: INFO: stderr: "" Aug 11 00:05:53.110: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 11 00:05:54.115: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:05:54.115: INFO: Found 0 / 1 Aug 11 00:05:55.114: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:05:55.114: INFO: Found 0 / 1 Aug 11 00:05:56.115: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:05:56.115: INFO: Found 1 / 1 Aug 11 00:05:56.115: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 11 00:05:56.118: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:05:56.118: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. Aug 11 00:05:56.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe pod agnhost-primary-m6n97 --namespace=kubectl-4891' Aug 11 00:05:56.232: INFO: stderr: "" Aug 11 00:05:56.232: INFO: stdout: "Name: agnhost-primary-m6n97\nNamespace: kubectl-4891\nPriority: 0\nNode: latest-worker2/172.18.0.12\nStart Time: Tue, 11 Aug 2020 00:05:52 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.58\nIPs:\n IP: 10.244.2.58\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://8031f28f3009dd8be20c5011d16cc943340ccd85970044945aba932632a7ab12\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 11 Aug 2020 00:05:55 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-72lwq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-72lwq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-72lwq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s Successfully assigned kubectl-4891/agnhost-primary-m6n97 to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-primary\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-primary\n" Aug 11 00:05:56.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-4891' Aug 11 00:05:56.366: INFO: stderr: "" Aug 11 00:05:56.366: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4891\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-m6n97\n" Aug 11 00:05:56.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-4891' Aug 11 00:05:56.477: INFO: stderr: "" Aug 11 00:05:56.478: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4891\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.101.237.186\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.58:6379\nSession Affinity: None\nEvents: \n" Aug 11 00:05:56.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe node 
latest-control-plane' Aug 11 00:05:56.630: INFO: stderr: "" Aug 11 00:05:56.630: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 19 Jul 2020 21:38:12 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 11 Aug 2020 00:05:56 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 11 Aug 2020 00:02:39 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 11 Aug 2020 00:02:39 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 11 Aug 2020 00:02:39 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 11 Aug 2020 00:02:39 +0000 Sun, 19 Jul 2020 21:39:43 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: e756079c6ff042fb9f9f1838b420a0a5\n System UUID: 397b219b-882b-4fb6-87c8-e536d116b866\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kindnet-mg7cm 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 22d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-proxy-gb68f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 22d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 11 00:05:56.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe namespace kubectl-4891' Aug 11 00:05:56.747: INFO: stderr: "" Aug 11 00:05:56.747: INFO: stdout: "Name: kubectl-4891\nLabels: e2e-framework=kubectl\n e2e-run=ecab459a-d7ed-4a36-96c1-e6f041d70e58\nAnnotations: 
\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:05:56.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4891" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":121,"skipped":1952,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:05:56.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 11 00:05:56.826: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 11 00:05:56.857: INFO: Waiting for terminating namespaces to be deleted... Aug 11 00:05:56.859: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 11 00:05:56.865: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.865: INFO: Container coredns ready: true, restart count 0 Aug 11 00:05:56.865: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.865: INFO: Container coredns ready: true, restart count 0 Aug 11 00:05:56.865: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.865: INFO: Container kindnet-cni ready: true, restart count 0 Aug 11 00:05:56.865: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.865: INFO: Container kube-proxy ready: true, restart count 0 Aug 11 00:05:56.865: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.865: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 11 00:05:56.865: INFO: netserver-0 from pod-network-test-9495 started at 2020-08-11 00:05:24 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.865: INFO: Container webserver ready: true, restart count 0 Aug 11 00:05:56.865: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 11 00:05:56.870: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.870: INFO: Container kindnet-cni ready: true, restart count 0 Aug 11 00:05:56.870: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.870: INFO: Container kube-proxy ready: true, restart count 0 Aug 11 00:05:56.870: INFO: agnhost-primary-m6n97 from kubectl-4891
started at 2020-08-11 00:05:52 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.870: INFO: Container agnhost-primary ready: true, restart count 0 Aug 11 00:05:56.870: INFO: host-test-container-pod from pod-network-test-9495 started at 2020-08-11 00:05:44 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.870: INFO: Container agnhost ready: true, restart count 0 Aug 11 00:05:56.870: INFO: netserver-1 from pod-network-test-9495 started at 2020-08-11 00:05:24 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.870: INFO: Container webserver ready: true, restart count 0 Aug 11 00:05:56.870: INFO: test-container-pod from pod-network-test-9495 started at 2020-08-11 00:05:44 +0000 UTC (1 container statuses recorded) Aug 11 00:05:56.870: INFO: Container webserver ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4a1971c0-9a4b-4019-bb84-b35cb168c572 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-4a1971c0-9a4b-4019-bb84-b35cb168c572 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4a1971c0-9a4b-4019-bb84-b35cb168c572 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:15.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-440" for this suite. 
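The scheduler treats a hostPort binding as the (hostIP, hostPort, protocol) triple, which is why all three pods above fit on the same node despite sharing port 54321. A sketch of the distinguishing port stanzas (containerPort 8080 is an assumed value; the hostIP, hostPort, and protocol values are those from the STEP lines):

    ports:                  # pod1
    - containerPort: 8080   # assumed
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
    ports:                  # pod2: same port, different hostIP
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2
      protocol: TCP
    ports:                  # pod3: same port and hostIP as pod2, UDP instead of TCP
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2
      protocol: UDP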
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.692 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":122,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:15.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Aug 11 00:06:15.521: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix528352320/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:15.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5490" for this suite. 
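Here kubectl proxy serves the API over a Unix domain socket instead of a TCP port, and the test then fetches /api/ through that socket. A hand-run equivalent (the socket path is illustrative; the test used a temporary directory):

    $ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    $ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/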
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":123,"skipped":1985,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:15.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Aug 11 00:06:15.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6639' Aug 11 00:06:15.975: INFO: stderr: "" Aug 11 00:06:15.975: INFO: stdout: "pod/pause created\n" Aug 11 00:06:15.975: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 11 00:06:15.975: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6639" to be "running and ready" Aug 11 00:06:15.980: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.755577ms Aug 11 00:06:18.051: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075993698s Aug 11 00:06:20.055: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.079463433s Aug 11 00:06:20.055: INFO: Pod "pause" satisfied condition "running and ready" Aug 11 00:06:20.055: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Aug 11 00:06:20.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6639' Aug 11 00:06:20.180: INFO: stderr: "" Aug 11 00:06:20.180: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 11 00:06:20.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6639' Aug 11 00:06:20.298: INFO: stderr: "" Aug 11 00:06:20.298: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 11 00:06:20.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6639' Aug 11 00:06:20.414: INFO: stderr: "" Aug 11 00:06:20.414: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 11 00:06:20.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6639' Aug 11 00:06:20.525: INFO: stderr: "" Aug 11 00:06:20.525: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Aug 11 00:06:20.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6639' Aug 11 00:06:21.127: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:06:21.128: INFO: stdout: "pod \"pause\" force deleted\n" Aug 11 00:06:21.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6639' Aug 11 00:06:21.470: INFO: stderr: "No resources found in kubectl-6639 namespace.\n" Aug 11 00:06:21.470: INFO: stdout: "" Aug 11 00:06:21.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6639 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 11 00:06:21.853: INFO: stderr: "" Aug 11 00:06:21.853: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:21.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6639" for this suite. 
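The label flow logged above is three plain kubectl calls; note the trailing '-' form, which deletes a label, and the -L flag, which adds the label as an output column so both states can be verified:

    $ kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-6639
    $ kubectl get pod pause -L testing-label --namespace=kubectl-6639
    $ kubectl label pods pause testing-label- --namespace=kubectl-6639
    $ kubectl get pod pause -L testing-label --namespace=kubectl-6639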
• [SLOW TEST:6.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":124,"skipped":2018,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:21.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-f37de980-2a94-4bd6-bb03-71f035d9f454 STEP: Creating a pod to test consume secrets Aug 11 00:06:23.215: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987" in namespace "projected-3443" to be "Succeeded or Failed" Aug 11 00:06:23.289: INFO: Pod "pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987": Phase="Pending", Reason="", readiness=false. Elapsed: 74.65992ms Aug 11 00:06:25.591: INFO: Pod "pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37610843s Aug 11 00:06:27.595: INFO: Pod "pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.380247844s STEP: Saw pod success Aug 11 00:06:27.595: INFO: Pod "pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987" satisfied condition "Succeeded or Failed" Aug 11 00:06:27.598: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987 container projected-secret-volume-test: STEP: delete the pod Aug 11 00:06:27.668: INFO: Waiting for pod pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987 to disappear Aug 11 00:06:27.678: INFO: Pod pod-projected-secrets-806dee8a-c315-4387-bf8c-2353c8f10987 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:27.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3443" for this suite. 
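The pod consumes the secret through a 'projected' volume rather than a plain 'secret' volume source. A minimal sketch of the relevant spec fragment (the volume name is an assumption; the secret name is the one created above):

    volumes:
    - name: projected-secret-volume   # assumed name
      projected:
        sources:
        - secret:
            name: projected-secret-test-f37de980-2a94-4bd6-bb03-71f035d9f454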
• [SLOW TEST:5.784 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2020,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:27.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3886 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3886 STEP: Deleting pre-stop pod Aug 11 00:06:40.839: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:40.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3886" for this suite. 
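The "Saw:" JSON above is the server pod's accounting: exactly one hit on its prestop endpoint, delivered by the tester pod's preStop hook while the tester was being deleted. A sketch of such a hook (the exact command is an assumption about the tester image's behavior):

    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "wget -qO- http://<server-pod-ip>:8080/prestop"]   # assumed command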
• [SLOW TEST:13.217 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":126,"skipped":2040,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:40.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-1eb3abe6-078d-4e14-a3df-9d489c7d8b15 STEP: Creating a pod to test consume configMaps Aug 11 00:06:40.974: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870" in namespace "projected-1572" to be "Succeeded or Failed" Aug 11 00:06:41.297: INFO: Pod "pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870": Phase="Pending", Reason="", readiness=false. Elapsed: 323.344764ms Aug 11 00:06:43.301: INFO: Pod "pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327154202s Aug 11 00:06:45.305: INFO: Pod "pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331519562s STEP: Saw pod success Aug 11 00:06:45.305: INFO: Pod "pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870" satisfied condition "Succeeded or Failed" Aug 11 00:06:45.308: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870 container projected-configmap-volume-test: STEP: delete the pod Aug 11 00:06:45.354: INFO: Waiting for pod pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870 to disappear Aug 11 00:06:45.403: INFO: Pod pod-projected-configmaps-53e95f15-7ae1-4ab9-adc0-fbf75df5a870 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:45.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1572" for this suite. 
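"As non-root" here means the container runs with a non-root UID and must still be able to read the projected configMap file. Sketch of the two spec pieces involved (the UID and volume name are illustrative; the configMap name is the one created above):

    securityContext:
      runAsUser: 1000                      # illustrative non-root UID
    volumes:
    - name: projected-configmap-volume     # assumed name
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-1eb3abe6-078d-4e14-a3df-9d489c7d8b15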
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":127,"skipped":2056,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:45.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 11 00:06:45.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4527' Aug 11 00:06:45.730: INFO: stderr: "" Aug 11 00:06:45.730: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 11 00:06:46.734: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:06:46.734: INFO: Found 0 / 1 Aug 11 00:06:47.735: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:06:47.735: INFO: Found 0 / 1 Aug 11 00:06:48.735: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:06:48.735: INFO: Found 0 / 1 Aug 11 00:06:49.735: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:06:49.735: INFO: Found 1 / 1 Aug 11 00:06:49.735: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 11 00:06:49.739: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:06:49.739: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 11 00:06:49.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config patch pod agnhost-primary-28rrh --namespace=kubectl-4527 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 11 00:06:49.858: INFO: stderr: "" Aug 11 00:06:49.859: INFO: stdout: "pod/agnhost-primary-28rrh patched\n" STEP: checking annotations Aug 11 00:06:49.913: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:06:49.913: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:06:49.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4527" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":128,"skipped":2073,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:06:49.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 in namespace container-probe-5877 Aug 11 00:06:54.055: INFO: Started pod liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 in namespace container-probe-5877 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 00:06:54.057: INFO: Initial restart count of pod liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 is 0 Aug 11 00:07:12.098: INFO: Restart count of pod container-probe-5877/liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 is now 1 (18.040874397s elapsed) Aug 11 00:07:32.140: INFO: Restart count of pod container-probe-5877/liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 is now 2 (38.082287699s elapsed) Aug 11 00:07:52.183: INFO: Restart count of pod container-probe-5877/liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 is now 3 (58.125622157s elapsed) Aug 11 00:08:12.298: INFO: Restart count of pod container-probe-5877/liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 is now 4 (1m18.240897635s elapsed) Aug 11 00:09:22.511: INFO: Restart count of pod container-probe-5877/liveness-6e9009be-465e-4af6-85a2-f3fc1891e8e5 is now 5 (2m28.453366559s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:09:22.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5877" for this suite. 
• [SLOW TEST:152.627 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":2087,"failed":0} SSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:09:22.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 11 00:09:23.642: INFO: starting watch STEP: patching STEP: updating Aug 11 00:09:23.695: INFO: waiting for watch events with expected annotations Aug 11 00:09:23.695: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:09:23.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-317" for this suite. 
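The CSR test walks the whole certificates.k8s.io/v1 surface, including the /approval and /status subresources. The discovery steps can be reproduced by hand with raw GETs against the paths named in the STEP lines:

    $ kubectl get --raw /apis/certificates.k8s.io
    $ kubectl get --raw /apis/certificates.k8s.io/v1
    $ kubectl get csr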
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":130,"skipped":2092,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:09:23.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-wxnz STEP: Creating a pod to test atomic-volume-subpath Aug 11 00:09:23.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wxnz" in namespace "subpath-63" to be "Succeeded or Failed" Aug 11 00:09:23.966: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03448ms Aug 11 00:09:26.012: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049451795s Aug 11 00:09:28.016: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 4.054114768s Aug 11 00:09:30.029: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 6.067134228s Aug 11 00:09:32.034: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 8.072059865s Aug 11 00:09:34.038: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 10.075858562s Aug 11 00:09:36.042: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 12.079564389s Aug 11 00:09:38.045: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 14.083179312s Aug 11 00:09:40.048: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 16.085974648s Aug 11 00:09:42.059: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 18.097238401s Aug 11 00:09:44.064: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 20.1024158s Aug 11 00:09:46.069: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Running", Reason="", readiness=true. Elapsed: 22.106801473s Aug 11 00:09:48.073: INFO: Pod "pod-subpath-test-projected-wxnz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.111029157s STEP: Saw pod success Aug 11 00:09:48.073: INFO: Pod "pod-subpath-test-projected-wxnz" satisfied condition "Succeeded or Failed" Aug 11 00:09:48.076: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-wxnz container test-container-subpath-projected-wxnz: STEP: delete the pod Aug 11 00:09:48.120: INFO: Waiting for pod pod-subpath-test-projected-wxnz to disappear Aug 11 00:09:48.143: INFO: Pod pod-subpath-test-projected-wxnz no longer exists STEP: Deleting pod pod-subpath-test-projected-wxnz Aug 11 00:09:48.143: INFO: Deleting pod "pod-subpath-test-projected-wxnz" in namespace "subpath-63" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:09:48.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-63" for this suite. • [SLOW TEST:24.321 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":131,"skipped":2114,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:09:48.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 11 00:09:48.259: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:10:04.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-315" for this suite.
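Marking a version not served is a one-field change on the CRD; the apiserver then drops that version's definitions from the published OpenAPI document while leaving the still-served version's schema untouched. Sketch of the relevant stanza (version names are illustrative):

    versions:
    - name: v1
      served: true
      storage: true
    - name: v2
      served: false     # this version's definitions disappear from /openapi/v2
      storage: false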
• [SLOW TEST:16.506 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":132,"skipped":2123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:10:04.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:10:04.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444" in namespace "downward-api-1616" to be "Succeeded or Failed" Aug 11 00:10:04.854: INFO: Pod "downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444": Phase="Pending", Reason="", readiness=false. Elapsed: 26.054965ms Aug 11 00:10:06.858: INFO: Pod "downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029688942s Aug 11 00:10:08.867: INFO: Pod "downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038417693s STEP: Saw pod success Aug 11 00:10:08.867: INFO: Pod "downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444" satisfied condition "Succeeded or Failed" Aug 11 00:10:08.870: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444 container client-container: STEP: delete the pod Aug 11 00:10:08.887: INFO: Waiting for pod downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444 to disappear Aug 11 00:10:08.892: INFO: Pod downwardapi-volume-67d54134-5372-42b0-a73b-d34adec9e444 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:10:08.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1616" for this suite. 
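The Downward API volume spec above exercises the fallback rule for resourceFieldRef: when the container declares no memory limit, a projected limits.memory resolves to the node's allocatable memory instead of failing. A minimal sketch of such a pod follows; the pod name, busybox image, and default namespace are illustrative assumptions, not details from the log.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/mem_limit"},
				// Deliberately no resources.limits.memory: the projected
				// value then falls back to the node's allocatable memory.
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "mem_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").
		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}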
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":2170,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:10:08.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-gbv7 STEP: Creating a pod to test atomic-volume-subpath Aug 11 00:10:09.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gbv7" in namespace "subpath-8523" to be "Succeeded or Failed" Aug 11 00:10:09.024: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.446875ms Aug 11 00:10:11.047: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04046925s Aug 11 00:10:13.052: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 4.044618534s Aug 11 00:10:15.054: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 6.047472007s Aug 11 00:10:17.058: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 8.051489011s Aug 11 00:10:19.062: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 10.055384426s Aug 11 00:10:21.066: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 12.05864541s Aug 11 00:10:23.070: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 14.06305914s Aug 11 00:10:25.074: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 16.067390778s Aug 11 00:10:27.078: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 18.071462088s Aug 11 00:10:29.082: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 20.07513424s Aug 11 00:10:31.086: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Running", Reason="", readiness=true. Elapsed: 22.079452363s Aug 11 00:10:33.091: INFO: Pod "pod-subpath-test-secret-gbv7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.083835113s STEP: Saw pod success Aug 11 00:10:33.091: INFO: Pod "pod-subpath-test-secret-gbv7" satisfied condition "Succeeded or Failed" Aug 11 00:10:33.096: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-gbv7 container test-container-subpath-secret-gbv7: STEP: delete the pod Aug 11 00:10:33.130: INFO: Waiting for pod pod-subpath-test-secret-gbv7 to disappear Aug 11 00:10:33.142: INFO: Pod pod-subpath-test-secret-gbv7 no longer exists STEP: Deleting pod pod-subpath-test-secret-gbv7 Aug 11 00:10:33.142: INFO: Deleting pod "pod-subpath-test-secret-gbv7" in namespace "subpath-8523" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:10:33.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8523" for this suite. • [SLOW TEST:24.220 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":134,"skipped":2185,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:10:33.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 11 00:10:33.257: INFO: Waiting up to 5m0s for pod "pod-54067e14-e649-4721-b842-57e661cd1195" in namespace "emptydir-602" to be "Succeeded or Failed" Aug 11 00:10:33.268: INFO: Pod "pod-54067e14-e649-4721-b842-57e661cd1195": Phase="Pending", Reason="", readiness=false. Elapsed: 11.683267ms Aug 11 00:10:35.272: INFO: Pod "pod-54067e14-e649-4721-b842-57e661cd1195": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015699824s Aug 11 00:10:37.277: INFO: Pod "pod-54067e14-e649-4721-b842-57e661cd1195": Phase="Running", Reason="", readiness=true. Elapsed: 4.019847323s Aug 11 00:10:39.281: INFO: Pod "pod-54067e14-e649-4721-b842-57e661cd1195": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024217654s STEP: Saw pod success Aug 11 00:10:39.281: INFO: Pod "pod-54067e14-e649-4721-b842-57e661cd1195" satisfied condition "Succeeded or Failed" Aug 11 00:10:39.284: INFO: Trying to get logs from node latest-worker2 pod pod-54067e14-e649-4721-b842-57e661cd1195 container test-container: STEP: delete the pod Aug 11 00:10:39.316: INFO: Waiting for pod pod-54067e14-e649-4721-b842-57e661cd1195 to disappear Aug 11 00:10:39.329: INFO: Pod pod-54067e14-e649-4721-b842-57e661cd1195 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:10:39.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-602" for this suite. • [SLOW TEST:6.187 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2188,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:10:39.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Aug 11 00:10:39.433: INFO: Waiting up to 5m0s for pod "var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694" in namespace "var-expansion-1082" to be "Succeeded or Failed" Aug 11 00:10:39.437: INFO: Pod "var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10096ms Aug 11 00:10:41.497: INFO: Pod "var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063748577s Aug 11 00:10:43.612: INFO: Pod "var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.178401076s STEP: Saw pod success Aug 11 00:10:43.612: INFO: Pod "var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694" satisfied condition "Succeeded or Failed" Aug 11 00:10:43.615: INFO: Trying to get logs from node latest-worker2 pod var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694 container dapi-container: STEP: delete the pod Aug 11 00:10:43.659: INFO: Waiting for pod var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694 to disappear Aug 11 00:10:43.707: INFO: Pod var-expansion-718a39c1-4805-4f8f-8b77-6d8c60b41694 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:10:43.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1082" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":136,"skipped":2206,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:10:43.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:10:43.844: INFO: Create a RollingUpdate DaemonSet Aug 11 00:10:43.847: INFO: Check that daemon pods launch on every node of the cluster Aug 11 00:10:43.851: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:10:43.862: INFO: Number of nodes with available pods: 0 Aug 11 00:10:43.862: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:10:44.866: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:10:44.870: INFO: Number of nodes with available pods: 0 Aug 11 00:10:44.870: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:10:45.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:10:45.911: INFO: Number of nodes with available pods: 0 Aug 11 00:10:45.911: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:10:46.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:10:46.962: INFO: Number of nodes with available pods: 0 Aug 11 00:10:46.962: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:10:47.869: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:10:47.876: INFO: Number of nodes with available pods: 1 Aug 11 00:10:47.876: INFO: Node latest-worker2 is running more than one daemon pod Aug 11 00:10:48.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:10:48.886: INFO: Number of nodes with available pods: 2 Aug 11 00:10:48.886: INFO: Number of running nodes: 2, number of available pods: 2 Aug 11 00:10:48.886: INFO: Update the DaemonSet to trigger a rollout Aug 11 00:10:48.893: INFO: Updating DaemonSet daemon-set Aug 11 00:11:03.910: INFO: Roll back the DaemonSet before rollout is complete Aug 11 00:11:03.918: INFO: Updating DaemonSet daemon-set Aug 11 00:11:03.918: INFO: Make sure DaemonSet rollback is complete Aug 11 00:11:03.928: INFO: Wrong image for pod: daemon-set-f2g5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 11 00:11:03.928: INFO: Pod daemon-set-f2g5t is not available Aug 11 00:11:03.946: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:11:04.951: INFO: Wrong image for pod: daemon-set-f2g5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 11 00:11:04.951: INFO: Pod daemon-set-f2g5t is not available Aug 11 00:11:04.955: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:11:05.996: INFO: Wrong image for pod: daemon-set-f2g5t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Aug 11 00:11:05.996: INFO: Pod daemon-set-f2g5t is not available Aug 11 00:11:06.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:11:06.952: INFO: Pod daemon-set-nm777 is not available Aug 11 00:11:06.957: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4207, will wait for the garbage collector to delete the pods Aug 11 00:11:07.021: INFO: Deleting DaemonSet.extensions daemon-set took: 6.957717ms Aug 11 00:11:07.421: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.292302ms Aug 11 00:11:10.625: INFO: Number of nodes with available pods: 0 Aug 11 00:11:10.625: INFO: Number of running nodes: 0, number of available pods: 0 Aug 11 00:11:10.627: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4207/daemonsets","resourceVersion":"6050357"},"items":null} Aug 11 00:11:10.629: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4207/pods","resourceVersion":"6050357"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:11:10.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4207" for this suite. • [SLOW TEST:26.877 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":137,"skipped":2228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:11:10.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:11:10.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-846" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":138,"skipped":2267,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:11:10.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 11 00:11:11.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7672 /api/v1/namespaces/watch-7672/configmaps/e2e-watch-test-resource-version 8acba098-8d03-43e3-aa07-728bb51f4257 6050375 0 2020-08-11 00:11:10 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-11 00:11:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:11:11.035: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7672 /api/v1/namespaces/watch-7672/configmaps/e2e-watch-test-resource-version 8acba098-8d03-43e3-aa07-728bb51f4257 6050376 0 2020-08-11 00:11:10 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-11 00:11:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:11:11.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7672" for this suite. 
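The Watchers spec above pins down the resourceVersion contract: a watch opened at an older resourceVersion replays every change made after that write, including events (here the second MODIFIED and the DELETED) that occurred before the watch itself was opened. A sketch of the pattern with client-go; the default namespace is an assumption, and the field selector merely scopes the watch to the one ConfigMap.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns, name := "default", "e2e-watch-test-resource-version"

	// First mutation: remember the resourceVersion the apiserver
	// assigned to this write.
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data = map[string]string{"mutation": "1"}
	updated, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}

	// ...further mutations, even a delete, can happen here...

	// A watch started at the remembered resourceVersion replays every
	// change made after that point, even ones that predate the watch.
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector:   "metadata.name=" + name,
		ResourceVersion: updated.ResourceVersion,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}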
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":139,"skipped":2283,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:11:11.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-88298b12-f888-4d5c-bd00-56a696b94e7f STEP: Creating a pod to test consume configMaps Aug 11 00:11:11.111: INFO: Waiting up to 5m0s for pod "pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334" in namespace "configmap-3502" to be "Succeeded or Failed" Aug 11 00:11:11.115: INFO: Pod "pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334": Phase="Pending", Reason="", readiness=false. Elapsed: 3.931806ms Aug 11 00:11:13.123: INFO: Pod "pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011560729s Aug 11 00:11:15.127: INFO: Pod "pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016178877s STEP: Saw pod success Aug 11 00:11:15.127: INFO: Pod "pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334" satisfied condition "Succeeded or Failed" Aug 11 00:11:15.131: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334 container configmap-volume-test: STEP: delete the pod Aug 11 00:11:15.188: INFO: Waiting for pod pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334 to disappear Aug 11 00:11:15.199: INFO: Pod pod-configmaps-25c6a86d-c0c7-4525-8741-bcab72a03334 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:11:15.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3502" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2285,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:11:15.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:11:21.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4800" for this suite. • [SLOW TEST:6.266 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":141,"skipped":2286,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:11:21.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:11:22.608: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:11:24.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701482, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701482, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701482, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701482, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:11:27.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:11:27.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7305-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:11:28.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7891" for this suite. STEP: Destroying namespace "webhook-7891-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.548 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":142,"skipped":2294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:11:29.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Aug 11 00:11:33.614: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9527 PodName:var-expansion-60305b3b-ccba-4d40-9a16-a9d6cf5ae43b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:11:33.614: INFO: >>> kubeConfig: /root/.kube/config I0811 00:11:33.649491 7 log.go:181] (0xc000124b00) (0xc001be2d20) Create stream I0811 00:11:33.649524 7 log.go:181] (0xc000124b00) (0xc001be2d20) Stream added, broadcasting: 1 I0811 00:11:33.651718 7 
log.go:181] (0xc000124b00) Reply frame received for 1 I0811 00:11:33.651771 7 log.go:181] (0xc000124b00) (0xc0005a9680) Create stream I0811 00:11:33.651796 7 log.go:181] (0xc000124b00) (0xc0005a9680) Stream added, broadcasting: 3 I0811 00:11:33.652955 7 log.go:181] (0xc000124b00) Reply frame received for 3 I0811 00:11:33.652997 7 log.go:181] (0xc000124b00) (0xc000e386e0) Create stream I0811 00:11:33.653012 7 log.go:181] (0xc000124b00) (0xc000e386e0) Stream added, broadcasting: 5 I0811 00:11:33.654084 7 log.go:181] (0xc000124b00) Reply frame received for 5 I0811 00:11:33.724832 7 log.go:181] (0xc000124b00) Data frame received for 5 I0811 00:11:33.724875 7 log.go:181] (0xc000e386e0) (5) Data frame handling I0811 00:11:33.724897 7 log.go:181] (0xc000124b00) Data frame received for 3 I0811 00:11:33.724914 7 log.go:181] (0xc0005a9680) (3) Data frame handling I0811 00:11:33.726252 7 log.go:181] (0xc000124b00) Data frame received for 1 I0811 00:11:33.726280 7 log.go:181] (0xc001be2d20) (1) Data frame handling I0811 00:11:33.726303 7 log.go:181] (0xc001be2d20) (1) Data frame sent I0811 00:11:33.726319 7 log.go:181] (0xc000124b00) (0xc001be2d20) Stream removed, broadcasting: 1 I0811 00:11:33.726339 7 log.go:181] (0xc000124b00) Go away received I0811 00:11:33.726451 7 log.go:181] (0xc000124b00) (0xc001be2d20) Stream removed, broadcasting: 1 I0811 00:11:33.726475 7 log.go:181] (0xc000124b00) (0xc0005a9680) Stream removed, broadcasting: 3 I0811 00:11:33.726493 7 log.go:181] (0xc000124b00) (0xc000e386e0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Aug 11 00:11:33.730: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9527 PodName:var-expansion-60305b3b-ccba-4d40-9a16-a9d6cf5ae43b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:11:33.730: INFO: >>> kubeConfig: /root/.kube/config I0811 00:11:33.761775 7 log.go:181] (0xc001742c60) (0xc000e38d20) Create stream I0811 00:11:33.761803 7 log.go:181] (0xc001742c60) (0xc000e38d20) Stream added, broadcasting: 1 I0811 00:11:33.763800 7 log.go:181] (0xc001742c60) Reply frame received for 1 I0811 00:11:33.763856 7 log.go:181] (0xc001742c60) (0xc001eb6b40) Create stream I0811 00:11:33.763876 7 log.go:181] (0xc001742c60) (0xc001eb6b40) Stream added, broadcasting: 3 I0811 00:11:33.765071 7 log.go:181] (0xc001742c60) Reply frame received for 3 I0811 00:11:33.765114 7 log.go:181] (0xc001742c60) (0xc001eb6be0) Create stream I0811 00:11:33.765139 7 log.go:181] (0xc001742c60) (0xc001eb6be0) Stream added, broadcasting: 5 I0811 00:11:33.766069 7 log.go:181] (0xc001742c60) Reply frame received for 5 I0811 00:11:33.824626 7 log.go:181] (0xc001742c60) Data frame received for 5 I0811 00:11:33.824688 7 log.go:181] (0xc001eb6be0) (5) Data frame handling I0811 00:11:33.824849 7 log.go:181] (0xc001742c60) Data frame received for 3 I0811 00:11:33.824891 7 log.go:181] (0xc001eb6b40) (3) Data frame handling I0811 00:11:33.826470 7 log.go:181] (0xc001742c60) Data frame received for 1 I0811 00:11:33.826491 7 log.go:181] (0xc000e38d20) (1) Data frame handling I0811 00:11:33.826512 7 log.go:181] (0xc000e38d20) (1) Data frame sent I0811 00:11:33.826526 7 log.go:181] (0xc001742c60) (0xc000e38d20) Stream removed, broadcasting: 1 I0811 00:11:33.826546 7 log.go:181] (0xc001742c60) Go away received I0811 00:11:33.826733 7 log.go:181] (0xc001742c60) (0xc000e38d20) Stream removed, broadcasting: 1 I0811 00:11:33.826772 7 log.go:181] (0xc001742c60) (0xc001eb6b40) 
Stream removed, broadcasting: 3 I0811 00:11:33.826790 7 log.go:181] (0xc001742c60) (0xc001eb6be0) Stream removed, broadcasting: 5 STEP: updating the annotation value Aug 11 00:11:34.347: INFO: Successfully updated pod "var-expansion-60305b3b-ccba-4d40-9a16-a9d6cf5ae43b" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Aug 11 00:11:34.371: INFO: Deleting pod "var-expansion-60305b3b-ccba-4d40-9a16-a9d6cf5ae43b" in namespace "var-expansion-9527" Aug 11 00:11:34.375: INFO: Wait up to 5m0s for pod "var-expansion-60305b3b-ccba-4d40-9a16-a9d6cf5ae43b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:12:14.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9527" for this suite. • [SLOW TEST:45.427 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":143,"skipped":2326,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:12:14.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-1e148a1b-79c3-4440-8043-027597d01796 STEP: Creating a pod to test consume secrets Aug 11 00:12:14.506: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2" in namespace "projected-3844" to be "Succeeded or Failed" Aug 11 00:12:14.530: INFO: Pod "pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.881572ms Aug 11 00:12:16.534: INFO: Pod "pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027981988s Aug 11 00:12:18.543: INFO: Pod "pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2": Phase="Running", Reason="", readiness=true. Elapsed: 4.036319918s Aug 11 00:12:20.570: INFO: Pod "pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.064095393s STEP: Saw pod success Aug 11 00:12:20.571: INFO: Pod "pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2" satisfied condition "Succeeded or Failed" Aug 11 00:12:20.573: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2 container projected-secret-volume-test: STEP: delete the pod Aug 11 00:12:20.631: INFO: Waiting for pod pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2 to disappear Aug 11 00:12:20.657: INFO: Pod pod-projected-secrets-e2b3c252-413f-447c-994c-0add0db24be2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:12:20.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3844" for this suite. • [SLOW TEST:6.217 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2344,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:12:20.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3686 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-3686 Aug 11 00:12:20.840: INFO: Found 0 stateful pods, waiting for 1 Aug 11 00:12:30.845: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 11 00:12:30.858: INFO: Deleting all statefulset in ns statefulset-3686 Aug 11 00:12:30.861: INFO: Scaling statefulset ss to 0 Aug 11 00:12:51.060: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 00:12:51.064: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
Aug 11 00:12:51.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3686" for this suite. • [SLOW TEST:30.419 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":145,"skipped":2353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:12:51.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 11 00:12:51.155: INFO: Waiting up to 5m0s for pod "pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9" in namespace "emptydir-8380" to be "Succeeded or Failed" Aug 11 00:12:51.167: INFO: Pod "pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.516597ms Aug 11 00:12:53.172: INFO: Pod "pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016754253s Aug 11 00:12:55.176: INFO: Pod "pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020778242s STEP: Saw pod success Aug 11 00:12:55.176: INFO: Pod "pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9" satisfied condition "Succeeded or Failed" Aug 11 00:12:55.179: INFO: Trying to get logs from node latest-worker2 pod pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9 container test-container: STEP: delete the pod Aug 11 00:12:55.261: INFO: Waiting for pod pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9 to disappear Aug 11 00:12:55.273: INFO: Pod pod-28dbe4e3-7fa1-4a29-a796-d4262669bca9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:12:55.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8380" for this suite. 
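The EmptyDir specs in this stretch (0777 earlier, 0644 here) share one shape: a tmpfs-backed emptyDir, a non-root UID, and a container that creates a file and reports its mode. The suite drives this through its mounttest image; the sketch below approximates the 0644 non-root case with a plain busybox shell and umask, so the UID, image, and paths are all illustrative, not the suite's mechanism.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	uid := int64(1001) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// umask 022 yields a 0644 file; stat echoes the mode back.
				Command: []string{"sh", "-c",
					"umask 022 && echo hi > /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").
		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}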
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2380,"failed":0} ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:12:55.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:12:55.350: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 11 00:12:55.394: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 11 00:13:00.440: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 11 00:13:00.440: INFO: Creating deployment "test-rolling-update-deployment" Aug 11 00:13:00.448: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 11 00:13:00.460: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 11 00:13:02.467: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 11 00:13:02.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701580, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701580, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701580, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701580, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:13:04.472: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 11 00:13:04.479: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4851 /apis/apps/v1/namespaces/deployment-4851/deployments/test-rolling-update-deployment b9f3047e-5879-41a7-b211-64f4fd706699 6051086 1 2020-08-11 00:13:00 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-11 00:13:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:13:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034c74a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-11 00:13:00 +0000 UTC,LastTransitionTime:2020-08-11 00:13:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-08-11 00:13:03 +0000 UTC,LastTransitionTime:2020-08-11 00:13:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 11 00:13:04.482: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-4851 /apis/apps/v1/namespaces/deployment-4851/replicasets/test-rolling-update-deployment-c4cb8d6d9 34881b56-4ad7-4d63-9648-d5781e50cc61 6051075 1 2020-08-11 00:13:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b9f3047e-5879-41a7-b211-64f4fd706699 0xc0034c7a20 0xc0034c7a21}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:13:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9f3047e-5879-41a7-b211-64f4fd706699\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034c7a98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:13:04.482: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 11 00:13:04.482: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4851 /apis/apps/v1/namespaces/deployment-4851/replicasets/test-rolling-update-controller 39770894-abaa-4fea-b7d4-104331f8f264 6051085 2 2020-08-11 00:12:55 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b9f3047e-5879-41a7-b211-64f4fd706699 0xc0034c7917 0xc0034c7918}] [] [{e2e.test Update apps/v1 2020-08-11 00:12:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:13:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9f3047e-5879-41a7-b211-64f4fd706699\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034c79b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:13:04.486: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-6dpwp" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-6dpwp test-rolling-update-deployment-c4cb8d6d9- deployment-4851 /api/v1/namespaces/deployment-4851/pods/test-rolling-update-deployment-c4cb8d6d9-6dpwp eee30b14-15ca-4be5-bd18-48827af28f41 6051074 0 2020-08-11 00:13:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 34881b56-4ad7-4d63-9648-d5781e50cc61 0xc002a0cfa0 0xc002a0cfa1}] [] [{kube-controller-manager Update v1 2020-08-11 00:13:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"34881b56-4ad7-4d63-9648-d5781e50cc61\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:13:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-92fpg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-92fpg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-92fpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:13:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:13:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:13:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:13:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.80,StartTime:2020-08-11 00:13:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:13:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://d1a051ea63d61c4b4d60ba8008eb403bdb1117434c59b2348dea2c103e4a2eca,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:04.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4851" for this suite. • [SLOW TEST:9.342 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":147,"skipped":2380,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:04.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 11 00:13:04.814: INFO: Waiting up to 5m0s for pod "downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4" in namespace "downward-api-9623" to be "Succeeded or Failed" Aug 11 00:13:04.901: INFO: Pod "downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4": Phase="Pending", Reason="", readiness=false. Elapsed: 86.261563ms Aug 11 00:13:06.954: INFO: Pod "downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139968532s Aug 11 00:13:08.959: INFO: Pod "downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.144046315s STEP: Saw pod success Aug 11 00:13:08.959: INFO: Pod "downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4" satisfied condition "Succeeded or Failed" Aug 11 00:13:08.961: INFO: Trying to get logs from node latest-worker2 pod downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4 container dapi-container: STEP: delete the pod Aug 11 00:13:08.982: INFO: Waiting for pod downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4 to disappear Aug 11 00:13:08.986: INFO: Pod downward-api-19fabc1f-157b-4314-8462-4233f7ac77b4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:08.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9623" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2389,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:08.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2404.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2404.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2404.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2404.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2404.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2404.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 00:13:15.198: INFO: DNS probes using dns-2404/dns-test-47160105-4ecf-48ff-95da-d1bc8b393462 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:15.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2404" for this suite. • [SLOW TEST:6.640 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":149,"skipped":2393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:15.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Aug 11 00:13:15.905: INFO: Created pod &Pod{ObjectMeta:{dns-6581 dns-6581 /api/v1/namespaces/dns-6581/pods/dns-6581 de33d87a-53aa-4413-9bbc-777bad8c7d03 6051212 0 2020-08-11 00:13:15 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:15 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9blzx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9blzx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9blzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 11 00:13:15.920: INFO: The status of Pod dns-6581 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:13:17.924: INFO: The status of Pod dns-6581 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:13:19.925: INFO: The status of Pod dns-6581 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 11 00:13:19.925: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6581 PodName:dns-6581 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:13:19.925: INFO: >>> kubeConfig: /root/.kube/config I0811 00:13:19.965500 7 log.go:181] (0xc001743810) (0xc0005d2780) Create stream I0811 00:13:19.965530 7 log.go:181] (0xc001743810) (0xc0005d2780) Stream added, broadcasting: 1 I0811 00:13:19.971422 7 log.go:181] (0xc001743810) Reply frame received for 1 I0811 00:13:19.971481 7 log.go:181] (0xc001743810) (0xc002c62aa0) Create stream I0811 00:13:19.971503 7 log.go:181] (0xc001743810) (0xc002c62aa0) Stream added, broadcasting: 3 I0811 00:13:19.972970 7 log.go:181] (0xc001743810) Reply frame received for 3 I0811 00:13:19.973060 7 log.go:181] (0xc001743810) (0xc002c62b40) Create stream I0811 00:13:19.974417 7 log.go:181] (0xc001743810) (0xc002c62b40) Stream added, broadcasting: 5 I0811 00:13:19.975367 7 log.go:181] (0xc001743810) Reply frame received for 5 I0811 00:13:20.054580 7 log.go:181] (0xc001743810) Data frame received for 3 I0811 00:13:20.054619 7 log.go:181] (0xc002c62aa0) (3) Data frame handling I0811 00:13:20.054638 7 log.go:181] (0xc002c62aa0) (3) Data frame sent I0811 00:13:20.056299 7 log.go:181] (0xc001743810) Data frame received for 5 I0811 00:13:20.056342 7 log.go:181] (0xc002c62b40) (5) Data frame handling I0811 00:13:20.056380 7 log.go:181] (0xc001743810) Data frame received for 3 I0811 00:13:20.056420 7 log.go:181] (0xc002c62aa0) (3) Data frame handling I0811 00:13:20.058624 7 log.go:181] (0xc001743810) Data frame received for 1 I0811 00:13:20.058665 7 log.go:181] (0xc0005d2780) (1) Data frame handling I0811 00:13:20.058703 7 log.go:181] (0xc0005d2780) (1) Data frame sent I0811 00:13:20.058870 7 log.go:181] (0xc001743810) (0xc0005d2780) Stream removed, broadcasting: 1 I0811 00:13:20.058955 7 log.go:181] (0xc001743810) (0xc0005d2780) Stream removed, broadcasting: 1 I0811 00:13:20.058996 7 log.go:181] (0xc001743810) (0xc002c62aa0) Stream removed, broadcasting: 3 I0811 00:13:20.059018 7 log.go:181] (0xc001743810) (0xc002c62b40) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod...
I0811 00:13:20.059095 7 log.go:181] (0xc001743810) Go away received Aug 11 00:13:20.059: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6581 PodName:dns-6581 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:13:20.059: INFO: >>> kubeConfig: /root/.kube/config I0811 00:13:20.092833 7 log.go:181] (0xc002ff0a50) (0xc0024c5ae0) Create stream I0811 00:13:20.092852 7 log.go:181] (0xc002ff0a50) (0xc0024c5ae0) Stream added, broadcasting: 1 I0811 00:13:20.094685 7 log.go:181] (0xc002ff0a50) Reply frame received for 1 I0811 00:13:20.094744 7 log.go:181] (0xc002ff0a50) (0xc0005d2820) Create stream I0811 00:13:20.094780 7 log.go:181] (0xc002ff0a50) (0xc0005d2820) Stream added, broadcasting: 3 I0811 00:13:20.095734 7 log.go:181] (0xc002ff0a50) Reply frame received for 3 I0811 00:13:20.095770 7 log.go:181] (0xc002ff0a50) (0xc002c62be0) Create stream I0811 00:13:20.095786 7 log.go:181] (0xc002ff0a50) (0xc002c62be0) Stream added, broadcasting: 5 I0811 00:13:20.096893 7 log.go:181] (0xc002ff0a50) Reply frame received for 5 I0811 00:13:20.178868 7 log.go:181] (0xc002ff0a50) Data frame received for 3 I0811 00:13:20.178915 7 log.go:181] (0xc0005d2820) (3) Data frame handling I0811 00:13:20.178935 7 log.go:181] (0xc0005d2820) (3) Data frame sent I0811 00:13:20.180358 7 log.go:181] (0xc002ff0a50) Data frame received for 5 I0811 00:13:20.180386 7 log.go:181] (0xc002c62be0) (5) Data frame handling I0811 00:13:20.180613 7 log.go:181] (0xc002ff0a50) Data frame received for 3 I0811 00:13:20.180631 7 log.go:181] (0xc0005d2820) (3) Data frame handling I0811 00:13:20.182231 7 log.go:181] (0xc002ff0a50) Data frame received for 1 I0811 00:13:20.182261 7 log.go:181] (0xc0024c5ae0) (1) Data frame handling I0811 00:13:20.182278 7 log.go:181] (0xc0024c5ae0) (1) Data frame sent I0811 00:13:20.182293 7 log.go:181] (0xc002ff0a50) (0xc0024c5ae0) Stream removed, broadcasting: 1 I0811 00:13:20.182306 7 log.go:181] (0xc002ff0a50) Go away received I0811 00:13:20.182435 7 log.go:181] (0xc002ff0a50) (0xc0024c5ae0) Stream removed, broadcasting: 1 I0811 00:13:20.182458 7 log.go:181] (0xc002ff0a50) (0xc0005d2820) Stream removed, broadcasting: 3 I0811 00:13:20.182469 7 log.go:181] (0xc002ff0a50) (0xc002c62be0) Stream removed, broadcasting: 5 Aug 11 00:13:20.182: INFO: Deleting pod dns-6581... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:20.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6581" for this suite. 
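The pod spec dumped above reduces to three relevant fields: dnsPolicy None, one explicit nameserver, and one explicit search path; everything else is defaulting noise. A minimal client-go sketch that creates an equivalent pod follows (the kubeconfig path matches this run; the pod name and "default" namespace are illustrative, not the generated ones above):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Build a client from the same kubeconfig this run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
		Spec: corev1.PodSpec{
			// dnsPolicy None makes the kubelet ignore cluster DNS and build
			// the container's resolv.conf purely from dnsConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"pause"}, // keep the container alive for exec probes
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("created pod", created.Name)
}

The two ExecWithOptions calls in the log then run agnhost's dns-suffix and dns-server-list subcommands inside the container to read the resolver configuration back and confirm the kubelet wrote exactly these values.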
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":150,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:20.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-9b3e7c39-d361-46f9-8a58-baad5e2eec5c STEP: Creating a pod to test consume configMaps Aug 11 00:13:20.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377" in namespace "configmap-7641" to be "Succeeded or Failed" Aug 11 00:13:20.310: INFO: Pod "pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218949ms Aug 11 00:13:22.323: INFO: Pod "pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016994246s Aug 11 00:13:24.327: INFO: Pod "pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377": Phase="Running", Reason="", readiness=true. Elapsed: 4.021284167s Aug 11 00:13:26.332: INFO: Pod "pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025595927s STEP: Saw pod success Aug 11 00:13:26.332: INFO: Pod "pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377" satisfied condition "Succeeded or Failed" Aug 11 00:13:26.335: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377 container configmap-volume-test: STEP: delete the pod Aug 11 00:13:26.370: INFO: Waiting for pod pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377 to disappear Aug 11 00:13:26.378: INFO: Pod pod-configmaps-b62af643-201d-4214-87d4-f5caed1c8377 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:26.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7641" for this suite. 
• [SLOW TEST:6.146 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2483,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:26.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:31.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8145" for this suite. 
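The adoption mechanic exercised above works because replication controllers match purely on labels: a pre-existing pod that satisfies the selector and has no controller owner is claimed rather than duplicated. A sketch of the sequence, with illustrative names, namespace, and image (the e2e suite's actual pod image is not shown in this log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()
	ns := "default"
	labels := map[string]string{"name": "pod-adoption"}

	// 1. A bare pod that satisfies the selector but has no controller owner.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pod-adoption",
			Image: "docker.io/library/httpd:2.4.38-alpine",
		}}},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, orphan, metav1.CreateOptions{})
	must(err)

	// 2. A replication controller whose selector matches the orphan.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	_, err = cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
	must(err)

	// 3. On its next sync the controller claims the pod instead of creating
	// a second one; the claim materializes as a controller ownerReference.
	p, err := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
	must(err)
	fmt.Println("owner:", metav1.GetControllerOf(p))
}

Real code would poll until the ownerReference appears; adoption happens on the controller manager's next sync, not synchronously with the create.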
• [SLOW TEST:5.177 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":152,"skipped":2497,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:31.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 11 00:13:31.634: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:43.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3215" for this suite. 
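The watch set up at the start of this test is what lets it assert that creation and graceful deletion were both observed, in order. Roughly, with client-go (the label, names, and grace period here are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()
	ns := "default"

	// Watch first, so no event between create and delete is missed.
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{LabelSelector: "test=watch-me"})
	must(err)
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "watched-pod", Labels: map[string]string{"test": "watch-me"}},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
			Args:  []string{"pause"},
		}}},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)

	// Graceful delete: the pod lingers in Terminating for up to the grace
	// period, emitting MODIFIED events, then a final DELETED arrives.
	grace := int64(30)
	must(cs.CoreV1().Pods(ns).Delete(ctx, "watched-pod", metav1.DeleteOptions{GracePeriodSeconds: &grace}))

	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}

The long tail of this test (about 12 seconds) is exactly that grace period playing out before the DELETED event is observed.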
• [SLOW TEST:12.313 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":153,"skipped":2506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:43.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 11 00:13:43.997: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8629 /api/v1/namespaces/watch-8629/configmaps/e2e-watch-test-label-changed eb3ffcd1-3845-482f-a699-0ce83cf31421 6051409 0 2020-08-11 00:13:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:13:43.998: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8629 /api/v1/namespaces/watch-8629/configmaps/e2e-watch-test-label-changed eb3ffcd1-3845-482f-a699-0ce83cf31421 6051410 0 2020-08-11 00:13:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:13:43.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8629 /api/v1/namespaces/watch-8629/configmaps/e2e-watch-test-label-changed eb3ffcd1-3845-482f-a699-0ce83cf31421 6051411 0 2020-08-11 00:13:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an 
add notification for the watched object when the label value was restored Aug 11 00:13:54.075: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8629 /api/v1/namespaces/watch-8629/configmaps/e2e-watch-test-label-changed eb3ffcd1-3845-482f-a699-0ce83cf31421 6051451 0 2020-08-11 00:13:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:13:54.075: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8629 /api/v1/namespaces/watch-8629/configmaps/e2e-watch-test-label-changed eb3ffcd1-3845-482f-a699-0ce83cf31421 6051452 0 2020-08-11 00:13:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:13:54.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8629 /api/v1/namespaces/watch-8629/configmaps/e2e-watch-test-label-changed eb3ffcd1-3845-482f-a699-0ce83cf31421 6051453 0 2020-08-11 00:13:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-11 00:13:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:13:54.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8629" for this suite. 
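One detail worth spelling out from the events above: the first DELETED notification arrives while the ConfigMap still exists. For a label-selector watch, DELETED means "left the selected set", which is exactly what changing the label value does; restoring the label later surfaces as a fresh ADDED. A compact sketch of that first transition (namespace is illustrative; the names mirror the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()
	ns := "default"

	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	must(err)
	defer w.Stop()

	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
		Name:   "e2e-watch-test-label-changed",
		Labels: map[string]string{"watch-this-configmap": "label-changed-and-restored"},
	}}
	cm, err = cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}) // watcher sees ADDED
	must(err)

	// Changing the label value pushes the object out of the selected set;
	// the watcher is told DELETED even though the object still exists.
	cm.Labels["watch-this-configmap"] = "some-other-value"
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	must(err)

	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}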
• [SLOW TEST:10.253 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":154,"skipped":2554,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:13:54.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 11 00:13:54.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6081' Aug 11 00:13:57.319: INFO: stderr: "" Aug 11 00:13:57.319: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Aug 11 00:13:57.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6081' Aug 11 00:14:03.833: INFO: stderr: "" Aug 11 00:14:03.833: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:03.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6081" for this suite. 
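With --restart=Never, kubectl run creates nothing but a bare pod: no Deployment, no Job, no controller of any kind. The client-go equivalent of the command shown above is approximately (namespace illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			// --restart=Never is what makes kubectl run emit a bare pod.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("pod/e2e-test-httpd-pod created")
}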
• [SLOW TEST:9.726 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":155,"skipped":2565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:03.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d9cbd228-f1c3-4deb-94ac-6383cc700734 STEP: Creating a pod to test consume secrets Aug 11 00:14:04.003: INFO: Waiting up to 5m0s for pod "pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86" in namespace "secrets-3998" to be "Succeeded or Failed" Aug 11 00:14:04.006: INFO: Pod "pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86": Phase="Pending", Reason="", readiness=false. Elapsed: 3.182667ms Aug 11 00:14:06.010: INFO: Pod "pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007664435s Aug 11 00:14:08.014: INFO: Pod "pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01126071s STEP: Saw pod success Aug 11 00:14:08.014: INFO: Pod "pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86" satisfied condition "Succeeded or Failed" Aug 11 00:14:08.024: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86 container secret-volume-test: STEP: delete the pod Aug 11 00:14:08.203: INFO: Waiting for pod pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86 to disappear Aug 11 00:14:08.210: INFO: Pod pod-secrets-122edf31-6059-43b0-bb27-ce440fca9a86 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:08.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3998" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":156,"skipped":2600,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:08.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Aug 11 00:14:08.364: INFO: Major version: 1 STEP: Confirm minor version Aug 11 00:14:08.364: INFO: cleanMinorVersion: 19 Aug 11 00:14:08.364: INFO: Minor version: 19+ [AfterEach] [sig-api-machinery] server version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:08.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-5672" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":157,"skipped":2607,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:08.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:14:08.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa" in namespace "downward-api-9864" to be "Succeeded or Failed" Aug 11 00:14:08.512: INFO: Pod "downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457496ms Aug 11 00:14:10.516: INFO: Pod "downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006741935s Aug 11 00:14:12.520: INFO: Pod "downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010822317s STEP: Saw pod success Aug 11 00:14:12.520: INFO: Pod "downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa" satisfied condition "Succeeded or Failed" Aug 11 00:14:12.523: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa container client-container: STEP: delete the pod Aug 11 00:14:12.575: INFO: Waiting for pod downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa to disappear Aug 11 00:14:12.582: INFO: Pod downwardapi-volume-c950cbf8-ac28-475b-8c4e-d7e9838a54aa no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9864" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:12.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1645/configmap-test-8acc381c-de29-40ab-b9fc-1840ee121be4 STEP: Creating a pod to test consume configMaps Aug 11 00:14:12.691: INFO: Waiting up to 5m0s for pod "pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54" in namespace "configmap-1645" to be "Succeeded or Failed" Aug 11 00:14:12.696: INFO: Pod "pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455307ms Aug 11 00:14:14.699: INFO: Pod "pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007962576s Aug 11 00:14:16.703: INFO: Pod "pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012207318s STEP: Saw pod success Aug 11 00:14:16.703: INFO: Pod "pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54" satisfied condition "Succeeded or Failed" Aug 11 00:14:16.706: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54 container env-test: STEP: delete the pod Aug 11 00:14:16.759: INFO: Waiting for pod pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54 to disappear Aug 11 00:14:16.767: INFO: Pod pod-configmaps-f9ab1c00-17e5-4759-997e-073196a33a54 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:16.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1645" for this suite. 
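For the environment-based consumption tested here, a container can pull a single ConfigMap key into one variable with valueFrom, or import every key wholesale with envFrom. Both styles, sketched with client-go (names, "default" namespace, and the busybox image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()
	ns := "default"

	cmName := "configmap-test"
	_, err = cs.CoreV1().ConfigMaps(ns).Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: cmName},
		Data:       map[string]string{"data-1": "value-1"},
	}, metav1.CreateOptions{})
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				// One key, one variable.
				Env: []corev1.EnvVar{{
					Name: "DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
							Key:                  "data-1",
						},
					},
				}},
				// Alternatively, import every key in the ConfigMap at once.
				EnvFrom: []corev1.EnvFromSource{{
					ConfigMapRef: &corev1.ConfigMapEnvSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)
}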
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":159,"skipped":2639,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:16.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-75724965-32a5-44d1-a2d5-072994dc0ada STEP: Creating a pod to test consume secrets Aug 11 00:14:16.955: INFO: Waiting up to 5m0s for pod "pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796" in namespace "secrets-6455" to be "Succeeded or Failed" Aug 11 00:14:16.959: INFO: Pod "pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199336ms Aug 11 00:14:19.063: INFO: Pod "pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107871719s Aug 11 00:14:21.066: INFO: Pod "pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111631892s STEP: Saw pod success Aug 11 00:14:21.066: INFO: Pod "pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796" satisfied condition "Succeeded or Failed" Aug 11 00:14:21.069: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796 container secret-volume-test: STEP: delete the pod Aug 11 00:14:21.115: INFO: Waiting for pod pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796 to disappear Aug 11 00:14:21.158: INFO: Pod pod-secrets-552cc4f3-38e6-4ccd-bdb8-e86a1d63d796 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:21.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6455" for this suite. STEP: Destroying namespace "secret-namespace-7793" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2651,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:21.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2916 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2916 STEP: creating replication controller externalsvc in namespace services-2916 I0811 00:14:21.781193 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2916, replica count: 2 I0811 00:14:24.831689 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 00:14:27.831951 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 11 00:14:27.956: INFO: Creating new exec pod Aug 11 00:14:31.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-2916 execpodggv4t -- /bin/sh -x -c nslookup nodeport-service.services-2916.svc.cluster.local' Aug 11 00:14:32.221: INFO: stderr: "I0811 00:14:32.103245 2327 log.go:181] (0xc000ea0d10) (0xc000e163c0) Create stream\nI0811 00:14:32.103305 2327 log.go:181] (0xc000ea0d10) (0xc000e163c0) Stream added, broadcasting: 1\nI0811 00:14:32.107679 2327 log.go:181] (0xc000ea0d10) Reply frame received for 1\nI0811 00:14:32.107751 2327 log.go:181] (0xc000ea0d10) (0xc000a325a0) Create stream\nI0811 00:14:32.107778 2327 log.go:181] (0xc000ea0d10) (0xc000a325a0) Stream added, broadcasting: 3\nI0811 00:14:32.108804 2327 log.go:181] (0xc000ea0d10) Reply frame received for 3\nI0811 00:14:32.108848 2327 log.go:181] (0xc000ea0d10) (0xc000a32aa0) Create stream\nI0811 00:14:32.108864 2327 log.go:181] (0xc000ea0d10) (0xc000a32aa0) Stream added, broadcasting: 5\nI0811 00:14:32.109935 2327 log.go:181] (0xc000ea0d10) Reply frame received for 5\nI0811 00:14:32.204484 2327 log.go:181] (0xc000ea0d10) Data frame received for 5\nI0811 00:14:32.204523 2327 log.go:181] (0xc000a32aa0) (5) Data frame handling\nI0811 00:14:32.204542 2327 log.go:181] (0xc000a32aa0) (5) Data frame sent\n+ nslookup nodeport-service.services-2916.svc.cluster.local\nI0811 00:14:32.210929 2327 log.go:181] (0xc000ea0d10) Data frame received for 3\nI0811 
00:14:32.210958 2327 log.go:181] (0xc000a325a0) (3) Data frame handling\nI0811 00:14:32.210976 2327 log.go:181] (0xc000a325a0) (3) Data frame sent\nI0811 00:14:32.211448 2327 log.go:181] (0xc000ea0d10) Data frame received for 3\nI0811 00:14:32.211467 2327 log.go:181] (0xc000a325a0) (3) Data frame handling\nI0811 00:14:32.211483 2327 log.go:181] (0xc000a325a0) (3) Data frame sent\nI0811 00:14:32.211866 2327 log.go:181] (0xc000ea0d10) Data frame received for 5\nI0811 00:14:32.211883 2327 log.go:181] (0xc000a32aa0) (5) Data frame handling\nI0811 00:14:32.212094 2327 log.go:181] (0xc000ea0d10) Data frame received for 3\nI0811 00:14:32.212113 2327 log.go:181] (0xc000a325a0) (3) Data frame handling\nI0811 00:14:32.214637 2327 log.go:181] (0xc000ea0d10) Data frame received for 1\nI0811 00:14:32.214689 2327 log.go:181] (0xc000e163c0) (1) Data frame handling\nI0811 00:14:32.214737 2327 log.go:181] (0xc000e163c0) (1) Data frame sent\nI0811 00:14:32.214773 2327 log.go:181] (0xc000ea0d10) (0xc000e163c0) Stream removed, broadcasting: 1\nI0811 00:14:32.214809 2327 log.go:181] (0xc000ea0d10) Go away received\nI0811 00:14:32.215256 2327 log.go:181] (0xc000ea0d10) (0xc000e163c0) Stream removed, broadcasting: 1\nI0811 00:14:32.215277 2327 log.go:181] (0xc000ea0d10) (0xc000a325a0) Stream removed, broadcasting: 3\nI0811 00:14:32.215288 2327 log.go:181] (0xc000ea0d10) (0xc000a32aa0) Stream removed, broadcasting: 5\n" Aug 11 00:14:32.221: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2916.svc.cluster.local\tcanonical name = externalsvc.services-2916.svc.cluster.local.\nName:\texternalsvc.services-2916.svc.cluster.local\nAddress: 10.108.185.109\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2916, will wait for the garbage collector to delete the pods Aug 11 00:14:32.293: INFO: Deleting ReplicationController externalsvc took: 18.308929ms Aug 11 00:14:32.693: INFO: Terminating ReplicationController externalsvc pods took: 400.324347ms Aug 11 00:14:38.158: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:38.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2916" for this suite. 
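The type flip at the heart of this test converts a selector-backed NodePort service into a pure DNS alias, which is why the nslookup above returns a CNAME to externalsvc rather than an A record for the original service. Roughly, the update looks like this in client-go (a sketch, not the e2e jig's exact code; converting in place requires dropping the allocated cluster IP and node ports):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)
	ctx := context.TODO()

	svcs := cs.CoreV1().Services("services-2916") // namespace from this run, for illustration
	svc, err := svcs.Get(ctx, "nodeport-service", metav1.GetOptions{})
	must(err)

	// ExternalName services hold no cluster IP and no node ports; clear the
	// allocated values, then point the name at the backing service's FQDN.
	// Kube-DNS/CoreDNS will answer queries for this service with a CNAME.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-2916.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0
	}
	_, err = svcs.Update(ctx, svc, metav1.UpdateOptions{})
	must(err)
}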
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:16.828 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":161,"skipped":2660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:38.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 11 00:14:42.518: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:42.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7901" for this suite. 
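The termination-message check above relies on two things in the pod spec: terminationMessagePolicy set to FallbackToLogsOnError, and a container that writes to its log but not to /dev/termination-log before failing. A minimal sketch of such a pod, assuming a busybox image and a hypothetical name (the suite's actual pod spec is not shown in the log); with this policy the kubelet copies the tail of the container log ("DONE") into the termination message only because the container failed and left the message file empty:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                                   # assumption; any image with a shell works
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]  # log output becomes the message on failure
    terminationMessagePolicy: FallbackToLogsOnError

A companion test later in this run (completed 168) checks the inverse: when the pod succeeds and writes nothing, the termination message stays empty.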
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:42.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 11 00:14:52.721: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:52.721: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:52.751215 7 log.go:181] (0xc002ff0840) (0xc00242ab40) Create stream I0811 00:14:52.751241 7 log.go:181] (0xc002ff0840) (0xc00242ab40) Stream added, broadcasting: 1 I0811 00:14:52.752988 7 log.go:181] (0xc002ff0840) Reply frame received for 1 I0811 00:14:52.753010 7 log.go:181] (0xc002ff0840) (0xc003724e60) Create stream I0811 00:14:52.753018 7 log.go:181] (0xc002ff0840) (0xc003724e60) Stream added, broadcasting: 3 I0811 00:14:52.753805 7 log.go:181] (0xc002ff0840) Reply frame received for 3 I0811 00:14:52.753890 7 log.go:181] (0xc002ff0840) (0xc002448a00) Create stream I0811 00:14:52.753914 7 log.go:181] (0xc002ff0840) (0xc002448a00) Stream added, broadcasting: 5 I0811 00:14:52.754789 7 log.go:181] (0xc002ff0840) Reply frame received for 5 I0811 00:14:52.802285 7 log.go:181] (0xc002ff0840) Data frame received for 3 I0811 00:14:52.802322 7 log.go:181] (0xc003724e60) (3) Data frame handling I0811 00:14:52.802336 7 log.go:181] (0xc003724e60) (3) Data frame sent I0811 00:14:52.802345 7 log.go:181] (0xc002ff0840) Data frame received for 3 I0811 00:14:52.802353 7 log.go:181] (0xc003724e60) (3) Data frame handling I0811 00:14:52.802409 7 log.go:181] (0xc002ff0840) Data frame received for 5 I0811 00:14:52.802464 7 log.go:181] (0xc002448a00) (5) Data frame handling I0811 00:14:52.803833 7 log.go:181] (0xc002ff0840) Data frame received for 1 I0811 00:14:52.803853 7 log.go:181] (0xc00242ab40) (1) Data frame handling I0811 00:14:52.803876 7 log.go:181] (0xc00242ab40) (1) Data frame sent I0811 00:14:52.803896 7 log.go:181] (0xc002ff0840) (0xc00242ab40) Stream removed, broadcasting: 1 I0811 00:14:52.803918 7 log.go:181] (0xc002ff0840) Go away received I0811 00:14:52.804038 7 log.go:181] (0xc002ff0840) (0xc00242ab40) Stream removed, broadcasting: 1 I0811 00:14:52.804056 7 log.go:181] (0xc002ff0840) (0xc003724e60) Stream removed, broadcasting: 3 I0811 00:14:52.804064 7 log.go:181] 
(0xc002ff0840) (0xc002448a00) Stream removed, broadcasting: 5 Aug 11 00:14:52.804: INFO: Exec stderr: "" Aug 11 00:14:52.804: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:52.804: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:52.838128 7 log.go:181] (0xc002ff0f20) (0xc00242adc0) Create stream I0811 00:14:52.838152 7 log.go:181] (0xc002ff0f20) (0xc00242adc0) Stream added, broadcasting: 1 I0811 00:14:52.840103 7 log.go:181] (0xc002ff0f20) Reply frame received for 1 I0811 00:14:52.840154 7 log.go:181] (0xc002ff0f20) (0xc00242ae60) Create stream I0811 00:14:52.840177 7 log.go:181] (0xc002ff0f20) (0xc00242ae60) Stream added, broadcasting: 3 I0811 00:14:52.841314 7 log.go:181] (0xc002ff0f20) Reply frame received for 3 I0811 00:14:52.841356 7 log.go:181] (0xc002ff0f20) (0xc003075180) Create stream I0811 00:14:52.841371 7 log.go:181] (0xc002ff0f20) (0xc003075180) Stream added, broadcasting: 5 I0811 00:14:52.842333 7 log.go:181] (0xc002ff0f20) Reply frame received for 5 I0811 00:14:52.913768 7 log.go:181] (0xc002ff0f20) Data frame received for 5 I0811 00:14:52.913794 7 log.go:181] (0xc003075180) (5) Data frame handling I0811 00:14:52.913823 7 log.go:181] (0xc002ff0f20) Data frame received for 3 I0811 00:14:52.913838 7 log.go:181] (0xc00242ae60) (3) Data frame handling I0811 00:14:52.913853 7 log.go:181] (0xc00242ae60) (3) Data frame sent I0811 00:14:52.913867 7 log.go:181] (0xc002ff0f20) Data frame received for 3 I0811 00:14:52.913879 7 log.go:181] (0xc00242ae60) (3) Data frame handling I0811 00:14:52.915906 7 log.go:181] (0xc002ff0f20) Data frame received for 1 I0811 00:14:52.916012 7 log.go:181] (0xc00242adc0) (1) Data frame handling I0811 00:14:52.916084 7 log.go:181] (0xc00242adc0) (1) Data frame sent I0811 00:14:52.916174 7 log.go:181] (0xc002ff0f20) (0xc00242adc0) Stream removed, broadcasting: 1 I0811 00:14:52.916304 7 log.go:181] (0xc002ff0f20) (0xc00242adc0) Stream removed, broadcasting: 1 I0811 00:14:52.916356 7 log.go:181] (0xc002ff0f20) (0xc00242ae60) Stream removed, broadcasting: 3 I0811 00:14:52.916521 7 log.go:181] (0xc002ff0f20) (0xc003075180) Stream removed, broadcasting: 5 Aug 11 00:14:52.916: INFO: Exec stderr: "" Aug 11 00:14:52.916: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:52.916: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:52.917429 7 log.go:181] (0xc002ff0f20) Go away received I0811 00:14:52.945780 7 log.go:181] (0xc002ff1550) (0xc00242b0e0) Create stream I0811 00:14:52.945806 7 log.go:181] (0xc002ff1550) (0xc00242b0e0) Stream added, broadcasting: 1 I0811 00:14:52.947657 7 log.go:181] (0xc002ff1550) Reply frame received for 1 I0811 00:14:52.947705 7 log.go:181] (0xc002ff1550) (0xc002a2a000) Create stream I0811 00:14:52.947719 7 log.go:181] (0xc002ff1550) (0xc002a2a000) Stream added, broadcasting: 3 I0811 00:14:52.948604 7 log.go:181] (0xc002ff1550) Reply frame received for 3 I0811 00:14:52.948629 7 log.go:181] (0xc002ff1550) (0xc002a2a0a0) Create stream I0811 00:14:52.948639 7 log.go:181] (0xc002ff1550) (0xc002a2a0a0) Stream added, broadcasting: 5 I0811 00:14:52.949690 7 log.go:181] (0xc002ff1550) Reply frame received for 5 I0811 00:14:53.025028 7 log.go:181] (0xc002ff1550) Data frame received for 5 I0811 00:14:53.025061 7 
log.go:181] (0xc002a2a0a0) (5) Data frame handling I0811 00:14:53.025081 7 log.go:181] (0xc002ff1550) Data frame received for 3 I0811 00:14:53.025094 7 log.go:181] (0xc002a2a000) (3) Data frame handling I0811 00:14:53.025108 7 log.go:181] (0xc002a2a000) (3) Data frame sent I0811 00:14:53.025116 7 log.go:181] (0xc002ff1550) Data frame received for 3 I0811 00:14:53.025125 7 log.go:181] (0xc002a2a000) (3) Data frame handling I0811 00:14:53.026587 7 log.go:181] (0xc002ff1550) Data frame received for 1 I0811 00:14:53.026652 7 log.go:181] (0xc00242b0e0) (1) Data frame handling I0811 00:14:53.026689 7 log.go:181] (0xc00242b0e0) (1) Data frame sent I0811 00:14:53.026735 7 log.go:181] (0xc002ff1550) (0xc00242b0e0) Stream removed, broadcasting: 1 I0811 00:14:53.026766 7 log.go:181] (0xc002ff1550) Go away received I0811 00:14:53.026899 7 log.go:181] (0xc002ff1550) (0xc00242b0e0) Stream removed, broadcasting: 1 I0811 00:14:53.026923 7 log.go:181] (0xc002ff1550) (0xc002a2a000) Stream removed, broadcasting: 3 I0811 00:14:53.026993 7 log.go:181] (0xc002ff1550) (0xc002a2a0a0) Stream removed, broadcasting: 5 Aug 11 00:14:53.027: INFO: Exec stderr: "" Aug 11 00:14:53.027: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.027: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.050819 7 log.go:181] (0xc002a34370) (0xc002a2a3c0) Create stream I0811 00:14:53.050856 7 log.go:181] (0xc002a34370) (0xc002a2a3c0) Stream added, broadcasting: 1 I0811 00:14:53.052580 7 log.go:181] (0xc002a34370) Reply frame received for 1 I0811 00:14:53.052620 7 log.go:181] (0xc002a34370) (0xc002a2a460) Create stream I0811 00:14:53.052632 7 log.go:181] (0xc002a34370) (0xc002a2a460) Stream added, broadcasting: 3 I0811 00:14:53.053776 7 log.go:181] (0xc002a34370) Reply frame received for 3 I0811 00:14:53.053802 7 log.go:181] (0xc002a34370) (0xc002a2a500) Create stream I0811 00:14:53.053813 7 log.go:181] (0xc002a34370) (0xc002a2a500) Stream added, broadcasting: 5 I0811 00:14:53.054581 7 log.go:181] (0xc002a34370) Reply frame received for 5 I0811 00:14:53.110653 7 log.go:181] (0xc002a34370) Data frame received for 3 I0811 00:14:53.110702 7 log.go:181] (0xc002a2a460) (3) Data frame handling I0811 00:14:53.110713 7 log.go:181] (0xc002a2a460) (3) Data frame sent I0811 00:14:53.110732 7 log.go:181] (0xc002a34370) Data frame received for 3 I0811 00:14:53.110740 7 log.go:181] (0xc002a2a460) (3) Data frame handling I0811 00:14:53.110764 7 log.go:181] (0xc002a34370) Data frame received for 5 I0811 00:14:53.110773 7 log.go:181] (0xc002a2a500) (5) Data frame handling I0811 00:14:53.112230 7 log.go:181] (0xc002a34370) Data frame received for 1 I0811 00:14:53.112262 7 log.go:181] (0xc002a2a3c0) (1) Data frame handling I0811 00:14:53.112289 7 log.go:181] (0xc002a2a3c0) (1) Data frame sent I0811 00:14:53.112305 7 log.go:181] (0xc002a34370) (0xc002a2a3c0) Stream removed, broadcasting: 1 I0811 00:14:53.112322 7 log.go:181] (0xc002a34370) Go away received I0811 00:14:53.112404 7 log.go:181] (0xc002a34370) (0xc002a2a3c0) Stream removed, broadcasting: 1 I0811 00:14:53.112418 7 log.go:181] (0xc002a34370) (0xc002a2a460) Stream removed, broadcasting: 3 I0811 00:14:53.112423 7 log.go:181] (0xc002a34370) (0xc002a2a500) Stream removed, broadcasting: 5 Aug 11 00:14:53.112: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies 
/etc/hosts mount Aug 11 00:14:53.112: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.112: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.142006 7 log.go:181] (0xc00106a6e0) (0xc002449180) Create stream I0811 00:14:53.142035 7 log.go:181] (0xc00106a6e0) (0xc002449180) Stream added, broadcasting: 1 I0811 00:14:53.143867 7 log.go:181] (0xc00106a6e0) Reply frame received for 1 I0811 00:14:53.143917 7 log.go:181] (0xc00106a6e0) (0xc002a2a5a0) Create stream I0811 00:14:53.143933 7 log.go:181] (0xc00106a6e0) (0xc002a2a5a0) Stream added, broadcasting: 3 I0811 00:14:53.144988 7 log.go:181] (0xc00106a6e0) Reply frame received for 3 I0811 00:14:53.145025 7 log.go:181] (0xc00106a6e0) (0xc003075220) Create stream I0811 00:14:53.145037 7 log.go:181] (0xc00106a6e0) (0xc003075220) Stream added, broadcasting: 5 I0811 00:14:53.145952 7 log.go:181] (0xc00106a6e0) Reply frame received for 5 I0811 00:14:53.216122 7 log.go:181] (0xc00106a6e0) Data frame received for 5 I0811 00:14:53.216169 7 log.go:181] (0xc003075220) (5) Data frame handling I0811 00:14:53.216204 7 log.go:181] (0xc00106a6e0) Data frame received for 3 I0811 00:14:53.216236 7 log.go:181] (0xc002a2a5a0) (3) Data frame handling I0811 00:14:53.216261 7 log.go:181] (0xc002a2a5a0) (3) Data frame sent I0811 00:14:53.216295 7 log.go:181] (0xc00106a6e0) Data frame received for 3 I0811 00:14:53.216336 7 log.go:181] (0xc002a2a5a0) (3) Data frame handling I0811 00:14:53.217996 7 log.go:181] (0xc00106a6e0) Data frame received for 1 I0811 00:14:53.218016 7 log.go:181] (0xc002449180) (1) Data frame handling I0811 00:14:53.218043 7 log.go:181] (0xc002449180) (1) Data frame sent I0811 00:14:53.218072 7 log.go:181] (0xc00106a6e0) (0xc002449180) Stream removed, broadcasting: 1 I0811 00:14:53.218163 7 log.go:181] (0xc00106a6e0) Go away received I0811 00:14:53.218196 7 log.go:181] (0xc00106a6e0) (0xc002449180) Stream removed, broadcasting: 1 I0811 00:14:53.218221 7 log.go:181] (0xc00106a6e0) (0xc002a2a5a0) Stream removed, broadcasting: 3 I0811 00:14:53.218275 7 log.go:181] (0xc00106a6e0) (0xc003075220) Stream removed, broadcasting: 5 Aug 11 00:14:53.218: INFO: Exec stderr: "" Aug 11 00:14:53.218: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.218: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.254078 7 log.go:181] (0xc002ff1ad0) (0xc00242b2c0) Create stream I0811 00:14:53.254112 7 log.go:181] (0xc002ff1ad0) (0xc00242b2c0) Stream added, broadcasting: 1 I0811 00:14:53.255997 7 log.go:181] (0xc002ff1ad0) Reply frame received for 1 I0811 00:14:53.256060 7 log.go:181] (0xc002ff1ad0) (0xc002449220) Create stream I0811 00:14:53.256088 7 log.go:181] (0xc002ff1ad0) (0xc002449220) Stream added, broadcasting: 3 I0811 00:14:53.257377 7 log.go:181] (0xc002ff1ad0) Reply frame received for 3 I0811 00:14:53.257418 7 log.go:181] (0xc002ff1ad0) (0xc002a2a6e0) Create stream I0811 00:14:53.257434 7 log.go:181] (0xc002ff1ad0) (0xc002a2a6e0) Stream added, broadcasting: 5 I0811 00:14:53.258389 7 log.go:181] (0xc002ff1ad0) Reply frame received for 5 I0811 00:14:53.328177 7 log.go:181] (0xc002ff1ad0) Data frame received for 5 I0811 00:14:53.328208 7 log.go:181] (0xc002a2a6e0) (5) Data frame handling I0811 00:14:53.328224 7 log.go:181] 
(0xc002ff1ad0) Data frame received for 3 I0811 00:14:53.328233 7 log.go:181] (0xc002449220) (3) Data frame handling I0811 00:14:53.328245 7 log.go:181] (0xc002449220) (3) Data frame sent I0811 00:14:53.328251 7 log.go:181] (0xc002ff1ad0) Data frame received for 3 I0811 00:14:53.328256 7 log.go:181] (0xc002449220) (3) Data frame handling I0811 00:14:53.329477 7 log.go:181] (0xc002ff1ad0) Data frame received for 1 I0811 00:14:53.329497 7 log.go:181] (0xc00242b2c0) (1) Data frame handling I0811 00:14:53.329510 7 log.go:181] (0xc00242b2c0) (1) Data frame sent I0811 00:14:53.329521 7 log.go:181] (0xc002ff1ad0) (0xc00242b2c0) Stream removed, broadcasting: 1 I0811 00:14:53.329561 7 log.go:181] (0xc002ff1ad0) Go away received I0811 00:14:53.329605 7 log.go:181] (0xc002ff1ad0) (0xc00242b2c0) Stream removed, broadcasting: 1 I0811 00:14:53.329615 7 log.go:181] (0xc002ff1ad0) (0xc002449220) Stream removed, broadcasting: 3 I0811 00:14:53.329624 7 log.go:181] (0xc002ff1ad0) (0xc002a2a6e0) Stream removed, broadcasting: 5 Aug 11 00:14:53.329: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 11 00:14:53.329: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.329: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.358310 7 log.go:181] (0xc00106ad10) (0xc0024494a0) Create stream I0811 00:14:53.358339 7 log.go:181] (0xc00106ad10) (0xc0024494a0) Stream added, broadcasting: 1 I0811 00:14:53.359993 7 log.go:181] (0xc00106ad10) Reply frame received for 1 I0811 00:14:53.360025 7 log.go:181] (0xc00106ad10) (0xc00242b360) Create stream I0811 00:14:53.360040 7 log.go:181] (0xc00106ad10) (0xc00242b360) Stream added, broadcasting: 3 I0811 00:14:53.360925 7 log.go:181] (0xc00106ad10) Reply frame received for 3 I0811 00:14:53.360967 7 log.go:181] (0xc00106ad10) (0xc003075540) Create stream I0811 00:14:53.360979 7 log.go:181] (0xc00106ad10) (0xc003075540) Stream added, broadcasting: 5 I0811 00:14:53.361819 7 log.go:181] (0xc00106ad10) Reply frame received for 5 I0811 00:14:53.417766 7 log.go:181] (0xc00106ad10) Data frame received for 3 I0811 00:14:53.417826 7 log.go:181] (0xc00242b360) (3) Data frame handling I0811 00:14:53.417857 7 log.go:181] (0xc00242b360) (3) Data frame sent I0811 00:14:53.417882 7 log.go:181] (0xc00106ad10) Data frame received for 3 I0811 00:14:53.417912 7 log.go:181] (0xc00242b360) (3) Data frame handling I0811 00:14:53.417945 7 log.go:181] (0xc00106ad10) Data frame received for 5 I0811 00:14:53.418003 7 log.go:181] (0xc003075540) (5) Data frame handling I0811 00:14:53.418952 7 log.go:181] (0xc00106ad10) Data frame received for 1 I0811 00:14:53.418966 7 log.go:181] (0xc0024494a0) (1) Data frame handling I0811 00:14:53.418979 7 log.go:181] (0xc0024494a0) (1) Data frame sent I0811 00:14:53.419173 7 log.go:181] (0xc00106ad10) (0xc0024494a0) Stream removed, broadcasting: 1 I0811 00:14:53.419201 7 log.go:181] (0xc00106ad10) Go away received I0811 00:14:53.419319 7 log.go:181] (0xc00106ad10) (0xc0024494a0) Stream removed, broadcasting: 1 I0811 00:14:53.419346 7 log.go:181] (0xc00106ad10) (0xc00242b360) Stream removed, broadcasting: 3 I0811 00:14:53.419359 7 log.go:181] (0xc00106ad10) (0xc003075540) Stream removed, broadcasting: 5 Aug 11 00:14:53.419: INFO: Exec stderr: "" Aug 11 00:14:53.419: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.419: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.447025 7 log.go:181] (0xc002a34a50) (0xc002a2a960) Create stream I0811 00:14:53.447049 7 log.go:181] (0xc002a34a50) (0xc002a2a960) Stream added, broadcasting: 1 I0811 00:14:53.449119 7 log.go:181] (0xc002a34a50) Reply frame received for 1 I0811 00:14:53.449163 7 log.go:181] (0xc002a34a50) (0xc003724fa0) Create stream I0811 00:14:53.449181 7 log.go:181] (0xc002a34a50) (0xc003724fa0) Stream added, broadcasting: 3 I0811 00:14:53.450119 7 log.go:181] (0xc002a34a50) Reply frame received for 3 I0811 00:14:53.450156 7 log.go:181] (0xc002a34a50) (0xc002449540) Create stream I0811 00:14:53.450169 7 log.go:181] (0xc002a34a50) (0xc002449540) Stream added, broadcasting: 5 I0811 00:14:53.451026 7 log.go:181] (0xc002a34a50) Reply frame received for 5 I0811 00:14:53.517490 7 log.go:181] (0xc002a34a50) Data frame received for 5 I0811 00:14:53.517603 7 log.go:181] (0xc002449540) (5) Data frame handling I0811 00:14:53.517647 7 log.go:181] (0xc002a34a50) Data frame received for 3 I0811 00:14:53.517667 7 log.go:181] (0xc003724fa0) (3) Data frame handling I0811 00:14:53.517702 7 log.go:181] (0xc003724fa0) (3) Data frame sent I0811 00:14:53.517728 7 log.go:181] (0xc002a34a50) Data frame received for 3 I0811 00:14:53.517748 7 log.go:181] (0xc003724fa0) (3) Data frame handling I0811 00:14:53.519292 7 log.go:181] (0xc002a34a50) Data frame received for 1 I0811 00:14:53.519317 7 log.go:181] (0xc002a2a960) (1) Data frame handling I0811 00:14:53.519330 7 log.go:181] (0xc002a2a960) (1) Data frame sent I0811 00:14:53.519352 7 log.go:181] (0xc002a34a50) (0xc002a2a960) Stream removed, broadcasting: 1 I0811 00:14:53.519368 7 log.go:181] (0xc002a34a50) Go away received I0811 00:14:53.519463 7 log.go:181] (0xc002a34a50) (0xc002a2a960) Stream removed, broadcasting: 1 I0811 00:14:53.519483 7 log.go:181] (0xc002a34a50) (0xc003724fa0) Stream removed, broadcasting: 3 I0811 00:14:53.519495 7 log.go:181] (0xc002a34a50) (0xc002449540) Stream removed, broadcasting: 5 Aug 11 00:14:53.519: INFO: Exec stderr: "" Aug 11 00:14:53.519: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.519: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.548352 7 log.go:181] (0xc00106b340) (0xc002449860) Create stream I0811 00:14:53.548396 7 log.go:181] (0xc00106b340) (0xc002449860) Stream added, broadcasting: 1 I0811 00:14:53.550409 7 log.go:181] (0xc00106b340) Reply frame received for 1 I0811 00:14:53.550442 7 log.go:181] (0xc00106b340) (0xc0037250e0) Create stream I0811 00:14:53.550449 7 log.go:181] (0xc00106b340) (0xc0037250e0) Stream added, broadcasting: 3 I0811 00:14:53.551397 7 log.go:181] (0xc00106b340) Reply frame received for 3 I0811 00:14:53.551442 7 log.go:181] (0xc00106b340) (0xc00242b400) Create stream I0811 00:14:53.551456 7 log.go:181] (0xc00106b340) (0xc00242b400) Stream added, broadcasting: 5 I0811 00:14:53.552402 7 log.go:181] (0xc00106b340) Reply frame received for 5 I0811 00:14:53.618519 7 log.go:181] (0xc00106b340) Data frame received for 5 I0811 00:14:53.618596 7 log.go:181] (0xc00242b400) (5) Data frame handling I0811 00:14:53.618642 7 log.go:181] (0xc00106b340) Data frame received for 3 I0811 00:14:53.618664 7 
log.go:181] (0xc0037250e0) (3) Data frame handling I0811 00:14:53.618683 7 log.go:181] (0xc0037250e0) (3) Data frame sent I0811 00:14:53.618695 7 log.go:181] (0xc00106b340) Data frame received for 3 I0811 00:14:53.618720 7 log.go:181] (0xc0037250e0) (3) Data frame handling I0811 00:14:53.619848 7 log.go:181] (0xc00106b340) Data frame received for 1 I0811 00:14:53.619863 7 log.go:181] (0xc002449860) (1) Data frame handling I0811 00:14:53.619876 7 log.go:181] (0xc002449860) (1) Data frame sent I0811 00:14:53.619884 7 log.go:181] (0xc00106b340) (0xc002449860) Stream removed, broadcasting: 1 I0811 00:14:53.619898 7 log.go:181] (0xc00106b340) Go away received I0811 00:14:53.620099 7 log.go:181] (0xc00106b340) (0xc002449860) Stream removed, broadcasting: 1 I0811 00:14:53.620144 7 log.go:181] (0xc00106b340) (0xc0037250e0) Stream removed, broadcasting: 3 I0811 00:14:53.620160 7 log.go:181] (0xc00106b340) (0xc00242b400) Stream removed, broadcasting: 5 Aug 11 00:14:53.620: INFO: Exec stderr: "" Aug 11 00:14:53.620: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4942 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:14:53.620: INFO: >>> kubeConfig: /root/.kube/config I0811 00:14:53.654901 7 log.go:181] (0xc001742790) (0xc003725400) Create stream I0811 00:14:53.654929 7 log.go:181] (0xc001742790) (0xc003725400) Stream added, broadcasting: 1 I0811 00:14:53.656692 7 log.go:181] (0xc001742790) Reply frame received for 1 I0811 00:14:53.656837 7 log.go:181] (0xc001742790) (0xc0024499a0) Create stream I0811 00:14:53.656876 7 log.go:181] (0xc001742790) (0xc0024499a0) Stream added, broadcasting: 3 I0811 00:14:53.657926 7 log.go:181] (0xc001742790) Reply frame received for 3 I0811 00:14:53.657971 7 log.go:181] (0xc001742790) (0xc002449ae0) Create stream I0811 00:14:53.657994 7 log.go:181] (0xc001742790) (0xc002449ae0) Stream added, broadcasting: 5 I0811 00:14:53.658923 7 log.go:181] (0xc001742790) Reply frame received for 5 I0811 00:14:53.722972 7 log.go:181] (0xc001742790) Data frame received for 5 I0811 00:14:53.723021 7 log.go:181] (0xc002449ae0) (5) Data frame handling I0811 00:14:53.723049 7 log.go:181] (0xc001742790) Data frame received for 3 I0811 00:14:53.723063 7 log.go:181] (0xc0024499a0) (3) Data frame handling I0811 00:14:53.723079 7 log.go:181] (0xc0024499a0) (3) Data frame sent I0811 00:14:53.723092 7 log.go:181] (0xc001742790) Data frame received for 3 I0811 00:14:53.723104 7 log.go:181] (0xc0024499a0) (3) Data frame handling I0811 00:14:53.724263 7 log.go:181] (0xc001742790) Data frame received for 1 I0811 00:14:53.724287 7 log.go:181] (0xc003725400) (1) Data frame handling I0811 00:14:53.724310 7 log.go:181] (0xc003725400) (1) Data frame sent I0811 00:14:53.724329 7 log.go:181] (0xc001742790) (0xc003725400) Stream removed, broadcasting: 1 I0811 00:14:53.724352 7 log.go:181] (0xc001742790) Go away received I0811 00:14:53.724461 7 log.go:181] (0xc001742790) (0xc003725400) Stream removed, broadcasting: 1 I0811 00:14:53.724485 7 log.go:181] (0xc001742790) (0xc0024499a0) Stream removed, broadcasting: 3 I0811 00:14:53.724496 7 log.go:181] (0xc001742790) (0xc002449ae0) Stream removed, broadcasting: 5 Aug 11 00:14:53.724: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-kubelet-etc-hosts-4942" for this suite. • [SLOW TEST:11.171 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:53.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 11 00:14:53.836: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 11 00:14:53.845: INFO: Waiting for terminating namespaces to be deleted... Aug 11 00:14:53.847: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 11 00:14:53.851: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.851: INFO: Container coredns ready: true, restart count 0 Aug 11 00:14:53.851: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.851: INFO: Container coredns ready: true, restart count 0 Aug 11 00:14:53.851: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.851: INFO: Container kindnet-cni ready: true, restart count 0 Aug 11 00:14:53.851: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.851: INFO: Container kube-proxy ready: true, restart count 0 Aug 11 00:14:53.851: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.851: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 11 00:14:53.851: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 11 00:14:53.855: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-4942 started at 2020-08-11 00:14:48 +0000 UTC (2 container statuses recorded) Aug 11 00:14:53.855: INFO: Container busybox-1 ready: true, restart count 0 Aug 11 00:14:53.855: INFO: Container busybox-2 ready: true, restart count 0 Aug 11 00:14:53.855: INFO: test-pod from e2e-kubelet-etc-hosts-4942 started at 2020-08-11 00:14:42 +0000 UTC (3 container statuses recorded) Aug 11 00:14:53.856: INFO: Container busybox-1 ready: true, restart count 0 Aug 11 00:14:53.856: INFO: Container busybox-2 ready: true, restart count 0 Aug 11 00:14:53.856: INFO: Container 
busybox-3 ready: true, restart count 0 Aug 11 00:14:53.856: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.856: INFO: Container kindnet-cni ready: true, restart count 0 Aug 11 00:14:53.856: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 11 00:14:53.856: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162a0e706c45bae7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.162a0e706e3e1561], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:14:54.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1230" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":164,"skipped":2756,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:14:54.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes Aug 11 00:15:00.104: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:15:01.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6003" for this suite. 
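Adoption and release in the ReplicaSet test above are purely label-driven: an orphan pod whose labels match the ReplicaSet's selector gets an ownerReference added (adoption), and editing the pod's label so it no longer matches makes the controller drop that ownerReference again (release). A sketch of the two objects involved, using the pod name from the log and an assumed agnhost image (the actual image is not shown in this run's output):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release   # matching the selector below is what triggers adoption
spec:
  containers:
  - name: pod-adoption-release
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumption
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumption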
• [SLOW TEST:6.271 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":165,"skipped":2771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:15:01.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-2f32a9e5-b2bf-4fd4-afb5-1e2e79c1a398 in namespace container-probe-3317 Aug 11 00:15:05.421: INFO: Started pod liveness-2f32a9e5-b2bf-4fd4-afb5-1e2e79c1a398 in namespace container-probe-3317 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 00:15:05.423: INFO: Initial restart count of pod liveness-2f32a9e5-b2bf-4fd4-afb5-1e2e79c1a398 is 0 Aug 11 00:15:31.483: INFO: Restart count of pod container-probe-3317/liveness-2f32a9e5-b2bf-4fd4-afb5-1e2e79c1a398 is now 1 (26.059701499s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:15:31.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3317" for this suite. 
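The restart observed above (restartCount 0 to 1 after roughly 26s) is the kubelet reacting to a failing HTTP liveness probe. The probe stanza below is the real API; the image and args are an assumption about one way to produce an endpoint that starts returning errors on /healthz (agnhost's liveness server behaves this way):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http   # hypothetical name; the run uses a generated UUID suffix
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumption
    args: ["liveness"]                               # serves /healthz, then starts failing it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1

Once /healthz returns a non-2xx status failureThreshold times, the kubelet kills and restarts the container, which is exactly the restartCount bump the test waits for.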
• [SLOW TEST:30.463 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2838,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:15:31.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673 Aug 11 00:15:32.465: INFO: Pod name my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673: Found 0 pods out of 1 Aug 11 00:15:37.468: INFO: Pod name my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673: Found 1 pods out of 1 Aug 11 00:15:37.469: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673" are running Aug 11 00:15:37.472: INFO: Pod "my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673-p6swx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:15:32 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:15:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:15:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:15:32 +0000 UTC Reason: Message:}]) Aug 11 00:15:37.472: INFO: Trying to dial the pod Aug 11 00:15:42.483: INFO: Controller my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673: Got expected result from replica 1 [my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673-p6swx]: "my-hostname-basic-6fb8d9a6-485f-4460-9fba-399fded86673-p6swx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:15:42.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-739" for this suite. 
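The check above ("Got expected result from replica 1") works because each replica serves its own pod name over HTTP and the test dials every pod. A sketch of a ReplicationController that behaves this way, with a hypothetical name and an assumed agnhost serve-hostname container (the log does not show the actual spec):

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic   # hypothetical; the run appends a UUID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumption
        args: ["serve-hostname"]                         # replies with the pod's hostname
        ports:
        - containerPort: 9376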
• [SLOW TEST:10.875 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":167,"skipped":2847,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:15:42.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 11 00:15:46.652: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:15:46.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8623" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2853,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:15:46.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:15:46.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b" in namespace "downward-api-5350" to be "Succeeded or Failed" Aug 11 00:15:46.795: INFO: Pod "downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422428ms Aug 11 00:15:48.816: INFO: Pod "downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024881421s Aug 11 00:15:50.820: INFO: Pod "downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028958713s STEP: Saw pod success Aug 11 00:15:50.820: INFO: Pod "downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b" satisfied condition "Succeeded or Failed" Aug 11 00:15:50.823: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b container client-container: STEP: delete the pod Aug 11 00:15:50.857: INFO: Waiting for pod downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b to disappear Aug 11 00:15:50.867: INFO: Pod downwardapi-volume-3adada8e-9f1c-4417-8835-cd1df869571b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:15:50.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5350" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:15:50.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:15:51.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1799" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":170,"skipped":2883,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:15:51.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:15:51.915: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:15:53.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:15:55.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701751, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:15:58.981: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:11.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1474" for this suite. STEP: Destroying namespace "webhook-1474-markers" for this suite. 
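All four timeout scenarios above are driven by two fields on the webhook registration: timeoutSeconds (which the log confirms defaults to 10s in v1) and failurePolicy, which decides whether an apiserver-side timeout rejects the request (Fail) or lets it through (Ignore). A sketch of the first scenario's registration, pointing at the e2e-test-webhook service from this run; the webhook name and path are assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook   # hypothetical name
webhooks:
- name: slow.example.com            # hypothetical
  admissionReviewVersions: ["v1"]
  sideEffects: None
  timeoutSeconds: 1                 # shorter than the webhook's 5s artificial latency
  failurePolicy: Fail               # the timed-out request is rejected; Ignore would let it pass
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-1474
      path: /always-allow-delay-5s  # assumption about the slow endpoint's path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]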
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.212 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":171,"skipped":2883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:11.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Aug 11 00:16:11.368: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Aug 11 00:16:11.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Aug 11 00:16:11.752: INFO: stderr: "" Aug 11 00:16:11.752: INFO: stdout: "service/agnhost-replica created\n" Aug 11 00:16:11.752: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Aug 11 00:16:11.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Aug 11 00:16:12.317: INFO: stderr: "" Aug 11 00:16:12.317: INFO: stdout: "service/agnhost-primary created\n" Aug 11 00:16:12.317: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 11 00:16:12.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Aug 11 00:16:12.894: INFO: stderr: "" Aug 11 00:16:12.894: INFO: stdout: "service/frontend created\n" Aug 11 00:16:12.895: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Aug 11 00:16:12.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Aug 11 00:16:13.179: INFO: stderr: "" Aug 11 00:16:13.179: INFO: stdout: "deployment.apps/frontend created\n" Aug 11 00:16:13.179: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 11 00:16:13.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Aug 11 00:16:13.504: INFO: stderr: "" Aug 11 00:16:13.504: INFO: stdout: "deployment.apps/agnhost-primary created\n" Aug 11 00:16:13.505: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 11 00:16:13.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Aug 11 00:16:13.780: INFO: stderr: "" Aug 11 00:16:13.780: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Aug 11 00:16:13.780: INFO: Waiting for all frontend pods to be Running. Aug 11 00:16:23.830: INFO: Waiting for frontend to serve content. Aug 11 00:16:23.841: INFO: Trying to add a new entry to the guestbook. Aug 11 00:16:23.852: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 11 00:16:23.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6390' Aug 11 00:16:24.098: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:16:24.098: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Aug 11 00:16:24.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6390' Aug 11 00:16:24.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:16:24.241: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 11 00:16:24.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6390' Aug 11 00:16:24.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:16:24.392: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 11 00:16:24.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6390' Aug 11 00:16:24.533: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:16:24.533: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 11 00:16:24.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6390' Aug 11 00:16:24.723: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:16:24.723: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 11 00:16:24.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6390' Aug 11 00:16:25.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 00:16:25.441: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:25.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6390" for this suite. 
• [SLOW TEST:14.778 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":172,"skipped":2911,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:26.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:16:26.948: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe" in namespace "projected-1549" to be "Succeeded or Failed" Aug 11 00:16:27.001: INFO: Pod "downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe": Phase="Pending", Reason="", readiness=false. Elapsed: 52.995383ms Aug 11 00:16:29.006: INFO: Pod "downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057565407s Aug 11 00:16:31.031: INFO: Pod "downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083103402s Aug 11 00:16:33.036: INFO: Pod "downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087584361s STEP: Saw pod success Aug 11 00:16:33.036: INFO: Pod "downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe" satisfied condition "Succeeded or Failed" Aug 11 00:16:33.039: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe container client-container: STEP: delete the pod Aug 11 00:16:33.089: INFO: Waiting for pod downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe to disappear Aug 11 00:16:33.092: INFO: Pod downwardapi-volume-7d00beff-49e2-4818-b8db-3c14a95286fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:33.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1549" for this suite. 
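The projected downward API test above exposes a container's own memory request as a file in a projected volume. A minimal sketch of such a pod, using an illustrative name and image rather than the generated ones, looks like:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: [ "sh", "-c", "cat /etc/podinfo/memory_request" ]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory

With no divisor set, the file contains the request in bytes, which is what the framework then reads back from the container's logs.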
• [SLOW TEST:7.027 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2914,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:33.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-7b1f0735-a293-4321-91de-47edb8c2cac5 STEP: Creating configMap with name cm-test-opt-upd-80d41e3a-7885-4850-9b57-75fade2dfaee STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7b1f0735-a293-4321-91de-47edb8c2cac5 STEP: Updating configmap cm-test-opt-upd-80d41e3a-7885-4850-9b57-75fade2dfaee STEP: Creating configMap with name cm-test-opt-create-0734958d-2f6e-44c6-b357-c09af21f111b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:41.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2419" for this suite. 
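The behavior exercised above hinges on the optional flag of a configMap volume source: a pod may reference ConfigMaps that have been deleted or do not exist yet, and the kubelet later syncs creations and updates into the mounted volume, which is exactly the create/update/delete sequence the STEPs above walk through. A minimal fragment, with an illustrative ConfigMap name:

volumes:
- name: optional-config
  configMap:
    name: cm-test-opt-create   # may be created only after the pod starts
    optional: true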
• [SLOW TEST:8.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2920,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:41.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:16:42.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:16:44.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701802, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701802, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701802, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701801, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:16:46.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701802, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701802, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701802, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701801, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:16:49.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:49.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8991" for this suite. STEP: Destroying namespace "webhook-8991-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.124 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":175,"skipped":2942,"failed":0} S ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:49.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:49.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-771" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":176,"skipped":2943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:49.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:16:49.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918" in namespace "projected-5404" to be "Succeeded or Failed" Aug 11 00:16:49.677: INFO: Pod "downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918": Phase="Pending", Reason="", readiness=false. Elapsed: 39.189024ms Aug 11 00:16:51.680: INFO: Pod "downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04258017s Aug 11 00:16:53.685: INFO: Pod "downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046716686s STEP: Saw pod success Aug 11 00:16:53.685: INFO: Pod "downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918" satisfied condition "Succeeded or Failed" Aug 11 00:16:53.688: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918 container client-container: STEP: delete the pod Aug 11 00:16:53.727: INFO: Waiting for pod downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918 to disappear Aug 11 00:16:53.741: INFO: Pod downwardapi-volume-6438ab07-32ca-4422-bf19-ffe198368918 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:16:53.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5404" for this suite. 
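The cpu variant just recorded relies on a documented downward API fallback: when the container declares no cpu limit, limits.cpu resolves to the node's allocatable cpu. A sketch of the relevant projected volume item, with illustrative names and divisor:

- path: cpu_limit
  resourceFieldRef:
    containerName: client-container
    resource: limits.cpu
    divisor: 1m   # report the value in millicores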
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2996,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:16:53.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 11 00:16:53.825: INFO: namespace kubectl-5894 Aug 11 00:16:53.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5894' Aug 11 00:16:54.105: INFO: stderr: "" Aug 11 00:16:54.105: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 11 00:16:55.109: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:16:55.109: INFO: Found 0 / 1 Aug 11 00:16:56.150: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:16:56.150: INFO: Found 0 / 1 Aug 11 00:16:57.109: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:16:57.109: INFO: Found 0 / 1 Aug 11 00:16:58.109: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:16:58.109: INFO: Found 1 / 1 Aug 11 00:16:58.109: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 11 00:16:58.113: INFO: Selector matched 1 pods for map[app:agnhost] Aug 11 00:16:58.113: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 11 00:16:58.113: INFO: wait on agnhost-primary startup in kubectl-5894 Aug 11 00:16:58.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs agnhost-primary-zw44r agnhost-primary --namespace=kubectl-5894' Aug 11 00:16:58.231: INFO: stderr: "" Aug 11 00:16:58.231: INFO: stdout: "Paused\n" STEP: exposing RC Aug 11 00:16:58.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5894' Aug 11 00:16:58.430: INFO: stderr: "" Aug 11 00:16:58.430: INFO: stdout: "service/rm2 exposed\n" Aug 11 00:16:58.457: INFO: Service rm2 in namespace kubectl-5894 found. STEP: exposing service Aug 11 00:17:00.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5894' Aug 11 00:17:00.603: INFO: stderr: "" Aug 11 00:17:00.603: INFO: stdout: "service/rm3 exposed\n" Aug 11 00:17:00.609: INFO: Service rm3 in namespace kubectl-5894 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:02.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5894" for this suite. • [SLOW TEST:8.870 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":178,"skipped":2999,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:02.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-2jgk STEP: Creating a pod to test atomic-volume-subpath Aug 11 00:17:02.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2jgk" in namespace "subpath-9664" to be "Succeeded or Failed" Aug 11 00:17:02.831: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.422317ms Aug 11 00:17:04.865: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067101492s Aug 11 00:17:06.869: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 4.07070841s Aug 11 00:17:08.929: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 6.130699045s Aug 11 00:17:10.933: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 8.134769827s Aug 11 00:17:12.937: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 10.138716342s Aug 11 00:17:14.940: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 12.142272799s Aug 11 00:17:16.945: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 14.14690347s Aug 11 00:17:18.970: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 16.172560713s Aug 11 00:17:20.974: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 18.176388876s Aug 11 00:17:22.979: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.180808084s Aug 11 00:17:24.982: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Running", Reason="", readiness=true. Elapsed: 22.184594424s Aug 11 00:17:27.067: INFO: Pod "pod-subpath-test-downwardapi-2jgk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.268942815s STEP: Saw pod success Aug 11 00:17:27.067: INFO: Pod "pod-subpath-test-downwardapi-2jgk" satisfied condition "Succeeded or Failed" Aug 11 00:17:27.070: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-2jgk container test-container-subpath-downwardapi-2jgk: STEP: delete the pod Aug 11 00:17:27.162: INFO: Waiting for pod pod-subpath-test-downwardapi-2jgk to disappear Aug 11 00:17:27.228: INFO: Pod pod-subpath-test-downwardapi-2jgk no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-2jgk Aug 11 00:17:27.228: INFO: Deleting pod "pod-subpath-test-downwardapi-2jgk" in namespace "subpath-9664" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:27.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9664" for this suite. • [SLOW TEST:24.612 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":179,"skipped":3002,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:27.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Aug 11 00:17:27.380: INFO: Waiting up to 5m0s for pod "client-containers-58d7a47f-2533-435c-bce0-014658438136" in namespace "containers-4123" to be "Succeeded or Failed" Aug 11 00:17:27.395: INFO: Pod "client-containers-58d7a47f-2533-435c-bce0-014658438136": Phase="Pending", Reason="", readiness=false. Elapsed: 14.451009ms Aug 11 00:17:29.498: INFO: Pod "client-containers-58d7a47f-2533-435c-bce0-014658438136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117254476s Aug 11 00:17:31.509: INFO: Pod "client-containers-58d7a47f-2533-435c-bce0-014658438136": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.128632008s STEP: Saw pod success Aug 11 00:17:31.509: INFO: Pod "client-containers-58d7a47f-2533-435c-bce0-014658438136" satisfied condition "Succeeded or Failed" Aug 11 00:17:31.511: INFO: Trying to get logs from node latest-worker2 pod client-containers-58d7a47f-2533-435c-bce0-014658438136 container test-container: STEP: delete the pod Aug 11 00:17:31.550: INFO: Waiting for pod client-containers-58d7a47f-2533-435c-bce0-014658438136 to disappear Aug 11 00:17:31.565: INFO: Pod client-containers-58d7a47f-2533-435c-bce0-014658438136 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:31.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4123" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":3013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:31.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:31.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5992" for this suite. 
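The listing this Services test performs through the API corresponds to the CLI query (illustrative equivalent):

kubectl get services --all-namespaces

that is, services are fetched across every namespace rather than only the test's own, and the service under test must show up in that cluster-wide list.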
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":181,"skipped":3070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:31.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:17:32.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:17:34.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701852, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701852, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701852, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701852, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:17:37.588: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:37.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-686" for this suite. STEP: Destroying namespace "webhook-686-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.225 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":182,"skipped":3162,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:37.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Aug 11 00:17:38.006: INFO: Waiting up to 5m0s for pod "var-expansion-bd964b65-63d5-4b13-8913-d948e50be333" in namespace "var-expansion-3373" to be "Succeeded or Failed" Aug 11 00:17:38.009: INFO: Pod "var-expansion-bd964b65-63d5-4b13-8913-d948e50be333": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303539ms Aug 11 00:17:40.012: INFO: Pod "var-expansion-bd964b65-63d5-4b13-8913-d948e50be333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006751221s Aug 11 00:17:42.016: INFO: Pod "var-expansion-bd964b65-63d5-4b13-8913-d948e50be333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010552691s STEP: Saw pod success Aug 11 00:17:42.016: INFO: Pod "var-expansion-bd964b65-63d5-4b13-8913-d948e50be333" satisfied condition "Succeeded or Failed" Aug 11 00:17:42.019: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bd964b65-63d5-4b13-8913-d948e50be333 container dapi-container: STEP: delete the pod Aug 11 00:17:42.044: INFO: Waiting for pod var-expansion-bd964b65-63d5-4b13-8913-d948e50be333 to disappear Aug 11 00:17:42.078: INFO: Pod var-expansion-bd964b65-63d5-4b13-8913-d948e50be333 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:42.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3373" for this suite. 
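The substitution verified above is kubelet-side $(VAR) expansion in a container's command, resolved from the container's env before the process starts; a $(VAR) with no matching variable is passed through literally. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: [ "sh", "-c", "echo $(MESSAGE)" ]   # kubelet replaces $(MESSAGE) before exec
    env:
    - name: MESSAGE
      value: "test-value"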
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":183,"skipped":3166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:42.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 11 00:17:42.345: INFO: Waiting up to 5m0s for pod "pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e" in namespace "emptydir-8298" to be "Succeeded or Failed" Aug 11 00:17:42.408: INFO: Pod "pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e": Phase="Pending", Reason="", readiness=false. Elapsed: 62.706225ms Aug 11 00:17:44.444: INFO: Pod "pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098996375s Aug 11 00:17:46.448: INFO: Pod "pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103185259s STEP: Saw pod success Aug 11 00:17:46.448: INFO: Pod "pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e" satisfied condition "Succeeded or Failed" Aug 11 00:17:46.451: INFO: Trying to get logs from node latest-worker2 pod pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e container test-container: STEP: delete the pod Aug 11 00:17:46.512: INFO: Waiting for pod pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e to disappear Aug 11 00:17:46.529: INFO: Pod pod-5239fcfc-2cc1-4684-8b68-69c76fb6623e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:17:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8298" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":3201,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:17:46.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 11 00:17:46.585: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:18:04.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1298" for this suite. • [SLOW TEST:18.041 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":185,"skipped":3203,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:18:04.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-c7b45eff-fba8-49f8-9da1-2fffb9af085d STEP: Creating a pod to test consume secrets Aug 11 00:18:04.813: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984" in namespace "projected-550" to be "Succeeded or Failed" Aug 11 00:18:04.815: INFO: Pod 
"pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.937108ms Aug 11 00:18:06.820: INFO: Pod "pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007246558s Aug 11 00:18:08.824: INFO: Pod "pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984": Phase="Running", Reason="", readiness=true. Elapsed: 4.011144386s Aug 11 00:18:10.827: INFO: Pod "pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014827218s STEP: Saw pod success Aug 11 00:18:10.827: INFO: Pod "pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984" satisfied condition "Succeeded or Failed" Aug 11 00:18:10.830: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984 container projected-secret-volume-test: STEP: delete the pod Aug 11 00:18:10.847: INFO: Waiting for pod pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984 to disappear Aug 11 00:18:10.852: INFO: Pod pod-projected-secrets-802a315c-01d6-4b3f-9d86-7aeb53752984 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:18:10.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-550" for this suite. • [SLOW TEST:6.282 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":3221,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:18:10.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:18:11.427: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:18:13.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:18:15.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732701891, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:18:18.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:18:18.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-668" for this suite. STEP: Destroying namespace "webhook-668-markers" for this suite. 
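The patch step above toggles which operations trigger the webhook. A hedged kubectl illustration of the same JSON patch (the configuration name and rule index are hypothetical; the test itself goes through the AdmissionRegistration API):

kubectl patch mutatingwebhookconfiguration example-webhook-config --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["CREATE"]}]'

Swapping the operations list away from CREATE and then back is what makes the first configMap come through unmutated and the second one mutated.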
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.041 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":187,"skipped":3230,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:18:18.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8589 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8589 STEP: creating replication controller externalsvc in namespace services-8589 I0811 00:18:19.084665 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8589, replica count: 2 I0811 00:18:22.135253 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 00:18:25.135523 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 11 00:18:25.211: INFO: Creating new exec pod Aug 11 00:18:29.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8589 execpodpr6c5 -- /bin/sh -x -c nslookup clusterip-service.services-8589.svc.cluster.local' Aug 11 00:18:29.549: INFO: stderr: "I0811 00:18:29.439023 2631 log.go:181] (0xc00074afd0) (0xc000bd3ae0) Create stream\nI0811 00:18:29.439099 2631 log.go:181] (0xc00074afd0) (0xc000bd3ae0) Stream added, broadcasting: 1\nI0811 00:18:29.442032 2631 log.go:181] (0xc00074afd0) Reply frame received for 1\nI0811 00:18:29.442071 2631 log.go:181] (0xc00074afd0) (0xc0009eee60) Create stream\nI0811 00:18:29.442099 2631 log.go:181] (0xc00074afd0) (0xc0009eee60) Stream added, broadcasting: 3\nI0811 00:18:29.443079 2631 log.go:181] (0xc00074afd0) Reply frame received for 3\nI0811 00:18:29.443120 2631 log.go:181] (0xc00074afd0) (0xc000982b40) Create stream\nI0811 00:18:29.443142 2631 log.go:181] 
(0xc00074afd0) (0xc000982b40) Stream added, broadcasting: 5\nI0811 00:18:29.443941 2631 log.go:181] (0xc00074afd0) Reply frame received for 5\nI0811 00:18:29.532246 2631 log.go:181] (0xc00074afd0) Data frame received for 5\nI0811 00:18:29.532268 2631 log.go:181] (0xc000982b40) (5) Data frame handling\nI0811 00:18:29.532280 2631 log.go:181] (0xc000982b40) (5) Data frame sent\n+ nslookup clusterip-service.services-8589.svc.cluster.local\nI0811 00:18:29.541158 2631 log.go:181] (0xc00074afd0) Data frame received for 3\nI0811 00:18:29.541187 2631 log.go:181] (0xc0009eee60) (3) Data frame handling\nI0811 00:18:29.541210 2631 log.go:181] (0xc0009eee60) (3) Data frame sent\nI0811 00:18:29.541656 2631 log.go:181] (0xc00074afd0) Data frame received for 3\nI0811 00:18:29.541678 2631 log.go:181] (0xc0009eee60) (3) Data frame handling\nI0811 00:18:29.541695 2631 log.go:181] (0xc0009eee60) (3) Data frame sent\nI0811 00:18:29.542133 2631 log.go:181] (0xc00074afd0) Data frame received for 5\nI0811 00:18:29.542160 2631 log.go:181] (0xc000982b40) (5) Data frame handling\nI0811 00:18:29.542195 2631 log.go:181] (0xc00074afd0) Data frame received for 3\nI0811 00:18:29.542211 2631 log.go:181] (0xc0009eee60) (3) Data frame handling\nI0811 00:18:29.544320 2631 log.go:181] (0xc00074afd0) Data frame received for 1\nI0811 00:18:29.544341 2631 log.go:181] (0xc000bd3ae0) (1) Data frame handling\nI0811 00:18:29.544349 2631 log.go:181] (0xc000bd3ae0) (1) Data frame sent\nI0811 00:18:29.544377 2631 log.go:181] (0xc00074afd0) (0xc000bd3ae0) Stream removed, broadcasting: 1\nI0811 00:18:29.544396 2631 log.go:181] (0xc00074afd0) Go away received\nI0811 00:18:29.544864 2631 log.go:181] (0xc00074afd0) (0xc000bd3ae0) Stream removed, broadcasting: 1\nI0811 00:18:29.544890 2631 log.go:181] (0xc00074afd0) (0xc0009eee60) Stream removed, broadcasting: 3\nI0811 00:18:29.544900 2631 log.go:181] (0xc00074afd0) (0xc000982b40) Stream removed, broadcasting: 5\n" Aug 11 00:18:29.549: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8589.svc.cluster.local\tcanonical name = externalsvc.services-8589.svc.cluster.local.\nName:\texternalsvc.services-8589.svc.cluster.local\nAddress: 10.105.236.192\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8589, will wait for the garbage collector to delete the pods Aug 11 00:18:29.609: INFO: Deleting ReplicationController externalsvc took: 6.701353ms Aug 11 00:18:30.109: INFO: Terminating ReplicationController externalsvc pods took: 500.222096ms Aug 11 00:18:44.002: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:18:44.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8589" for this suite. 
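The nslookup output above (clusterip-service ... canonical name = externalsvc.services-8589.svc.cluster.local) is the observable effect of the type change: an ExternalName service publishes a DNS CNAME instead of a ClusterIP. The post-change spec is equivalent to this sketch:

apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-8589
spec:
  type: ExternalName
  externalName: externalsvc.services-8589.svc.cluster.local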
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.134 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":188,"skipped":3241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:18:44.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4090.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4090.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 109.111.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.111.109_udp@PTR;check="$$(dig +tcp +noall +answer +search 109.111.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.111.109_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4090.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4090.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 109.111.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.111.109_udp@PTR;check="$$(dig +tcp +noall +answer +search 109.111.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.111.109_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 00:18:50.211: INFO: Unable to read wheezy_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.215: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.219: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.227: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.252: INFO: Unable to read jessie_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.255: INFO: Unable to read jessie_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.258: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.261: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:50.282: INFO: Lookups using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f failed for: [wheezy_udp@dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_udp@dns-test-service.dns-4090.svc.cluster.local jessie_tcp@dns-test-service.dns-4090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local] Aug 11 00:18:55.287: INFO: Unable to read wheezy_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.291: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods 
dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.295: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.298: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.317: INFO: Unable to read jessie_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.321: INFO: Unable to read jessie_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.324: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.327: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:18:55.349: INFO: Lookups using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f failed for: [wheezy_udp@dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_udp@dns-test-service.dns-4090.svc.cluster.local jessie_tcp@dns-test-service.dns-4090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local] Aug 11 00:19:00.287: INFO: Unable to read wheezy_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.291: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.294: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.296: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.314: INFO: Unable to read jessie_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the 
server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.317: INFO: Unable to read jessie_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.320: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.323: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:00.342: INFO: Lookups using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f failed for: [wheezy_udp@dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_udp@dns-test-service.dns-4090.svc.cluster.local jessie_tcp@dns-test-service.dns-4090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local] Aug 11 00:19:05.287: INFO: Unable to read wheezy_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.292: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.295: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.298: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.318: INFO: Unable to read jessie_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.321: INFO: Unable to read jessie_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.323: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.326: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod 
dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:05.345: INFO: Lookups using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f failed for: [wheezy_udp@dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_udp@dns-test-service.dns-4090.svc.cluster.local jessie_tcp@dns-test-service.dns-4090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local] Aug 11 00:19:10.287: INFO: Unable to read wheezy_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.290: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.293: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.296: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.318: INFO: Unable to read jessie_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.320: INFO: Unable to read jessie_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.323: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.326: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:10.343: INFO: Lookups using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f failed for: [wheezy_udp@dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_udp@dns-test-service.dns-4090.svc.cluster.local jessie_tcp@dns-test-service.dns-4090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local] Aug 11 
00:19:15.286: INFO: Unable to read wheezy_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.288: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.291: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.294: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.313: INFO: Unable to read jessie_udp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.315: INFO: Unable to read jessie_tcp@dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.317: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.320: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local from pod dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f: the server could not find the requested resource (get pods dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f) Aug 11 00:19:15.339: INFO: Lookups using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f failed for: [wheezy_udp@dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@dns-test-service.dns-4090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_udp@dns-test-service.dns-4090.svc.cluster.local jessie_tcp@dns-test-service.dns-4090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4090.svc.cluster.local] Aug 11 00:19:20.344: INFO: DNS probes using dns-4090/dns-test-9c8b2194-ebe7-4e93-809b-fd872ae65d2f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:19:21.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4090" for this suite. 
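The probers above retry dig once per second and write an OK marker file only after a query returns an answer, so the repeated "Unable to read" lines are the poller waiting for records to appear, not test failures; the probes converge at 00:19:20. For reference, a minimal Go sketch, using the k8s.io/api types this suite is built on, of the kind of headless Service behind the A and SRV names being resolved. The selector label, port name, and image of the backing pods are illustrative assumptions, not the suite's actual fixture:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: "dns-4090"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,                  // headless: DNS answers with pod IPs
			Selector:  map[string]string{"dns-test": "true"}, // illustrative label
			Ports: []corev1.ServicePort{
				{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP},
			},
		},
	}
	out, err := yaml.Marshal(&svc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

ClusterIP "None" is what makes the Service headless: queries for dns-test-service.dns-4090.svc.cluster.local return the backing pod IPs directly, and the named "http" port is what the _http._tcp SRV records advertise.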
• [SLOW TEST:37.153 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":189,"skipped":3305,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:19:21.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-e917777e-ae2a-4478-bba4-b22a28798161 STEP: Creating a pod to test consume secrets Aug 11 00:19:21.255: INFO: Waiting up to 5m0s for pod "pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421" in namespace "secrets-7423" to be "Succeeded or Failed" Aug 11 00:19:21.295: INFO: Pod "pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421": Phase="Pending", Reason="", readiness=false. Elapsed: 40.569344ms Aug 11 00:19:23.298: INFO: Pod "pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043490306s Aug 11 00:19:25.511: INFO: Pod "pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.256506641s STEP: Saw pod success Aug 11 00:19:25.511: INFO: Pod "pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421" satisfied condition "Succeeded or Failed" Aug 11 00:19:25.514: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421 container secret-env-test: STEP: delete the pod Aug 11 00:19:25.673: INFO: Waiting for pod pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421 to disappear Aug 11 00:19:25.684: INFO: Pod pod-secrets-bee6785e-9e70-4066-a14e-d40a9d60b421 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:19:25.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7423" for this suite. 
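The Secrets test above injects a secret key into the container's environment through a secretKeyRef. A minimal sketch of that pod shape; the secret name, key, image, and command are illustrative assumptions, not the suite's fixtures:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env", Namespace: "secrets-7423"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"}, // the test reads this output back from the container logs
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}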
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":3308,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:19:25.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-d8f7be75-1ac0-4b64-9e2f-95d6d99f12a8 STEP: Creating a pod to test consume configMaps Aug 11 00:19:25.807: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395" in namespace "projected-1310" to be "Succeeded or Failed" Aug 11 00:19:25.818: INFO: Pod "pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395": Phase="Pending", Reason="", readiness=false. Elapsed: 11.491281ms Aug 11 00:19:27.823: INFO: Pod "pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01595363s Aug 11 00:19:29.827: INFO: Pod "pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395": Phase="Running", Reason="", readiness=true. Elapsed: 4.019908923s Aug 11 00:19:31.830: INFO: Pod "pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023616744s STEP: Saw pod success Aug 11 00:19:31.830: INFO: Pod "pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395" satisfied condition "Succeeded or Failed" Aug 11 00:19:31.833: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395 container projected-configmap-volume-test: STEP: delete the pod Aug 11 00:19:31.851: INFO: Waiting for pod pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395 to disappear Aug 11 00:19:31.918: INFO: Pod pod-projected-configmaps-17db2aa0-9d18-45fd-be51-05b74a7aa395 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:19:31.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1310" for this suite. 
• [SLOW TEST:6.244 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":3316,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:19:31.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 11 00:19:31.991: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 11 00:19:44.490: INFO: >>> kubeConfig: /root/.kube/config Aug 11 00:19:47.497: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:19:58.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7397" for this suite. 
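The multi-version case above publishes two served versions under one API group, exactly one of which is the storage version, and checks that both show up in the OpenAPI document. A sketch of such a CRD using the apiextensions v1 types; the group, kind, and version names are illustrative, not the suite's generated ones:

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	crd := apiextv1.CustomResourceDefinition{
		// CRD names must be <plural>.<group>
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.crd-publish-openapi.test"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.crd-publish-openapi.test",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			// two served versions in the same group; exactly one is the storage version
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				{Name: "v3", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	out, err := yaml.Marshal(&crd)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}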
• [SLOW TEST:26.554 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":192,"skipped":3322,"failed":0} [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:19:58.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:19:58.608: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:20:00.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:20:02.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:04.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:06.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:08.611: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:10.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:12.626: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:14.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:16.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = false) Aug 11 00:20:18.612: INFO: The status of Pod test-webserver-821ca100-0f25-4fc8-86b3-a241e16cf442 is Running (Ready = true) Aug 11 00:20:18.615: INFO: Container started at 2020-08-11 00:20:01 +0000 UTC, pod became ready at 2020-08-11 00:20:17 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:20:18.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6557" for this suite. 
• [SLOW TEST:20.135 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":193,"skipped":3322,"failed":0} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:20:18.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Aug 11 00:22:19.246: INFO: Successfully updated pod "var-expansion-b88997e6-96fe-46cf-95d2-139de6a7c24b" STEP: waiting for pod running STEP: deleting the pod gracefully Aug 11 00:22:23.279: INFO: Deleting pod "var-expansion-b88997e6-96fe-46cf-95d2-139de6a7c24b" in namespace "var-expansion-3117" Aug 11 00:22:23.284: INFO: Wait up to 5m0s for pod "var-expansion-b88997e6-96fe-46cf-95d2-139de6a7c24b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:22:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3117" for this suite. 
• [SLOW TEST:158.692 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":194,"skipped":3322,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:22:57.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 11 00:22:57.416: INFO: Waiting up to 5m0s for pod "pod-1b72f0d5-79ab-462b-b050-89a1b05916bf" in namespace "emptydir-987" to be "Succeeded or Failed" Aug 11 00:22:57.428: INFO: Pod "pod-1b72f0d5-79ab-462b-b050-89a1b05916bf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.068551ms Aug 11 00:22:59.432: INFO: Pod "pod-1b72f0d5-79ab-462b-b050-89a1b05916bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016269706s Aug 11 00:23:01.449: INFO: Pod "pod-1b72f0d5-79ab-462b-b050-89a1b05916bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033388972s STEP: Saw pod success Aug 11 00:23:01.449: INFO: Pod "pod-1b72f0d5-79ab-462b-b050-89a1b05916bf" satisfied condition "Succeeded or Failed" Aug 11 00:23:01.452: INFO: Trying to get logs from node latest-worker2 pod pod-1b72f0d5-79ab-462b-b050-89a1b05916bf container test-container: STEP: delete the pod Aug 11 00:23:01.493: INFO: Waiting for pod pod-1b72f0d5-79ab-462b-b050-89a1b05916bf to disappear Aug 11 00:23:01.502: INFO: Pod pod-1b72f0d5-79ab-462b-b050-89a1b05916bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:01.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-987" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3330,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:01.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:23:01.600: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f8cd6899-fe09-46f6-8aa2-beac24f1211f" in namespace "security-context-test-1444" to be "Succeeded or Failed" Aug 11 00:23:01.815: INFO: Pod "busybox-readonly-false-f8cd6899-fe09-46f6-8aa2-beac24f1211f": Phase="Pending", Reason="", readiness=false. Elapsed: 215.202499ms Aug 11 00:23:03.964: INFO: Pod "busybox-readonly-false-f8cd6899-fe09-46f6-8aa2-beac24f1211f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364028183s Aug 11 00:23:05.968: INFO: Pod "busybox-readonly-false-f8cd6899-fe09-46f6-8aa2-beac24f1211f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.367974334s Aug 11 00:23:05.968: INFO: Pod "busybox-readonly-false-f8cd6899-fe09-46f6-8aa2-beac24f1211f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:05.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1444" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":3340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:05.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-fe1cc665-491a-4bc1-aee8-0e56b8d3e9cc STEP: Creating a pod to test consume secrets Aug 11 00:23:06.170: INFO: Waiting up to 5m0s for pod "pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5" in namespace "secrets-423" to be "Succeeded or Failed" Aug 11 00:23:06.231: INFO: Pod "pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 61.001958ms Aug 11 00:23:08.235: INFO: Pod "pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06551846s Aug 11 00:23:10.240: INFO: Pod "pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070238503s STEP: Saw pod success Aug 11 00:23:10.240: INFO: Pod "pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5" satisfied condition "Succeeded or Failed" Aug 11 00:23:10.243: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5 container secret-volume-test: STEP: delete the pod Aug 11 00:23:10.294: INFO: Waiting for pod pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5 to disappear Aug 11 00:23:10.347: INFO: Pod pod-secrets-603cea85-1268-4fb7-9359-5fe3ddfcd2f5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:10.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-423" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3410,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:10.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 11 00:23:10.502: INFO: Waiting up to 5m0s for pod "pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb" in namespace "emptydir-7726" to be "Succeeded or Failed" Aug 11 00:23:10.508: INFO: Pod "pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.434193ms Aug 11 00:23:12.511: INFO: Pod "pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008988632s Aug 11 00:23:14.516: INFO: Pod "pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013096876s STEP: Saw pod success Aug 11 00:23:14.516: INFO: Pod "pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb" satisfied condition "Succeeded or Failed" Aug 11 00:23:14.519: INFO: Trying to get logs from node latest-worker2 pod pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb container test-container: STEP: delete the pod Aug 11 00:23:14.534: INFO: Waiting for pod pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb to disappear Aug 11 00:23:14.538: INFO: Pod pod-4ae49a8f-9e06-4adf-a9d7-5d4e6ab73bcb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:14.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7726" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":198,"skipped":3414,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:14.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:23:14.643: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:15.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9420" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":199,"skipped":3415,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:15.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 11 00:23:19.354: INFO: &Pod{ObjectMeta:{send-events-db590a0b-1b6f-4a48-9cdf-8327b5d7faed events-9016 /api/v1/namespaces/events-9016/pods/send-events-db590a0b-1b6f-4a48-9cdf-8327b5d7faed 9d9af26c-f56f-4999-a9fd-a76c2168b0be 6054871 0 2020-08-11 00:23:15 +0000 UTC map[name:foo time:323903653] map[] [] [] [{e2e.test Update v1 2020-08-11 00:23:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:23:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rkk7c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rkk7c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rkk7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enabl
eServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.125,StartTime:2020-08-11 00:23:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:23:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://0a2c4b9fa1233b2978a564baf150e68a7459215c766be51b92e7ce56c6f82ea6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 11 00:23:21.359: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 11 00:23:23.363: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:23.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9016" for this suite. 
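The assertions "Saw scheduler event" and "Saw kubelet event" above come from listing events with a field selector on the pod's involvedObject and on the event source. A client-go sketch of the same shape of query, assuming a kubeconfig at ~/.kube/config; the pod and namespace names are the ones from this run:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// one query per emitting component: the scheduler and the kubelet
	for _, source := range []string{"default-scheduler", "kubelet"} {
		sel := fields.Set{
			"involvedObject.kind":      "Pod",
			"involvedObject.name":      "send-events-db590a0b-1b6f-4a48-9cdf-8327b5d7faed",
			"involvedObject.namespace": "events-9016",
			"source":                   source,
		}.AsSelector().String()
		evs, err := cs.CoreV1().Events("events-9016").List(context.TODO(),
			metav1.ListOptions{FieldSelector: sel})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s events: %d\n", source, len(evs.Items))
	}
}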
• [SLOW TEST:8.158 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":200,"skipped":3421,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:23.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:23:23.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962" in namespace "downward-api-9796" to be "Succeeded or Failed" Aug 11 00:23:23.515: INFO: Pod "downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962": Phase="Pending", Reason="", readiness=false. Elapsed: 16.533099ms Aug 11 00:23:25.602: INFO: Pod "downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102880962s Aug 11 00:23:27.606: INFO: Pod "downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106851004s STEP: Saw pod success Aug 11 00:23:27.606: INFO: Pod "downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962" satisfied condition "Succeeded or Failed" Aug 11 00:23:27.609: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962 container client-container: STEP: delete the pod Aug 11 00:23:27.679: INFO: Waiting for pod downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962 to disappear Aug 11 00:23:27.838: INFO: Pod downwardapi-volume-6aa98985-702f-44c6-ad20-f0d1f8108962 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:27.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9796" for this suite. 
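The downward API volume in the test above exposes the container's own memory limit as a file, which the client container reads back. A sketch; the 64Mi limit, paths, and image are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume", Namespace: "downward-api-9796"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// resourceFieldRef needs the container name, since each
							// container can carry different limits
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}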
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":201,"skipped":3424,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:27.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Aug 11 00:23:28.008: INFO: created test-pod-1 Aug 11 00:23:28.013: INFO: created test-pod-2 Aug 11 00:23:28.019: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:28.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1285" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":202,"skipped":3446,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:28.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:23:28.426: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 11 00:23:33.431: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 11 00:23:33.431: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 11 00:23:37.546: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5008 /apis/apps/v1/namespaces/deployment-5008/deployments/test-cleanup-deployment 9be3566c-bd29-4b09-940c-7d7dc50c3283 6055047 1 2020-08-11 00:23:33 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-08-11 00:23:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004210248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-11 00:23:33 +0000 UTC,LastTransitionTime:2020-08-11 00:23:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-08-11 00:23:36 +0000 UTC,LastTransitionTime:2020-08-11 00:23:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 11 00:23:37.550: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-5008 /apis/apps/v1/namespaces/deployment-5008/replicasets/test-cleanup-deployment-5d446bdd47 4d2af16d-94c7-40be-a1d5-ea8af95e3c8b 6055036 1 2020-08-11 00:23:33 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9be3566c-bd29-4b09-940c-7d7dc50c3283 0xc004210697 0xc004210698}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9be3566c-bd29-4b09-940c-7d7dc50c3283\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004210728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:23:37.553: INFO: Pod "test-cleanup-deployment-5d446bdd47-97vl6" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-97vl6 test-cleanup-deployment-5d446bdd47- deployment-5008 /api/v1/namespaces/deployment-5008/pods/test-cleanup-deployment-5d446bdd47-97vl6 ef96d7da-ef75-4e91-a419-956acd2193c2 6055035 0 2020-08-11 00:23:33 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 4d2af16d-94c7-40be-a1d5-ea8af95e3c8b 0xc004210b17 0xc004210b18}] [] [{kube-controller-manager Update v1 2020-08-11 00:23:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d2af16d-94c7-40be-a1d5-ea8af95e3c8b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:23:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.130\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b9bf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b9bf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b9bf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:23:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.130,StartTime:2020-08-11 00:23:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:23:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://f41ab6a36d6deafbdaab09bebb290cd65f89fd9624c156d58340d2e0607a9a63,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:23:37.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5008" for this suite. 
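The deployment dump above carries the whole mechanism of this test: RevisionHistoryLimit:*0 tells the controller to prune superseded ReplicaSets as soon as they are scaled down, which is why the old cleanup-pod ReplicaSet disappears. A minimal Go sketch of an equivalent Deployment follows; it is illustrative only, not the e2e framework's own helper, with names taken from the dump.

// Illustrative sketch, not the e2e framework's code: a Deployment whose
// revisionHistoryLimit of 0 causes old ReplicaSets to be deleted on rollout.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0), // matches RevisionHistoryLimit:*0 in the dump
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
					}},
				},
			},
		},
	}
	fmt.Printf("revisionHistoryLimit=%d\n", *d.Spec.RevisionHistoryLimit)
}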
• [SLOW TEST:9.205 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":203,"skipped":3447,"failed":0} [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:23:37.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-9e4a9c1b-4ed0-4c7c-a26a-acf6393e749e in namespace container-probe-7596 Aug 11 00:23:41.674: INFO: Started pod busybox-9e4a9c1b-4ed0-4c7c-a26a-acf6393e749e in namespace container-probe-7596 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 00:23:41.677: INFO: Initial restart count of pod busybox-9e4a9c1b-4ed0-4c7c-a26a-acf6393e749e is 0 Aug 11 00:24:37.896: INFO: Restart count of pod container-probe-7596/busybox-9e4a9c1b-4ed0-4c7c-a26a-acf6393e749e is now 1 (56.218606374s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:24:37.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7596" for this suite. 
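The restart recorded at ~56s happens because the probed file vanishes partway through the pod's life. Below is a minimal sketch of the pod shape this spec exercises; the busybox command is an assumption for illustration (the exact fixture command is not shown in this log): it creates /tmp/health, removes it after 10 seconds, and keeps the container alive so the exec probe starts failing.

// Minimal sketch (assumed command): an exec liveness probe that begins
// failing once the container deletes /tmp/health, triggering one restart.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // embedded Handler in the v1.19-era core/v1 API
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() { _ = livenessPod() }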
• [SLOW TEST:60.416 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3447,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:24:37.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:24:38.659: INFO: Checking APIGroup: apiregistration.k8s.io Aug 11 00:24:38.660: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Aug 11 00:24:38.660: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.660: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Aug 11 00:24:38.660: INFO: Checking APIGroup: extensions Aug 11 00:24:38.661: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Aug 11 00:24:38.661: INFO: Versions found [{extensions/v1beta1 v1beta1}] Aug 11 00:24:38.661: INFO: extensions/v1beta1 matches extensions/v1beta1 Aug 11 00:24:38.661: INFO: Checking APIGroup: apps Aug 11 00:24:38.662: INFO: PreferredVersion.GroupVersion: apps/v1 Aug 11 00:24:38.662: INFO: Versions found [{apps/v1 v1}] Aug 11 00:24:38.662: INFO: apps/v1 matches apps/v1 Aug 11 00:24:38.662: INFO: Checking APIGroup: events.k8s.io Aug 11 00:24:38.663: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Aug 11 00:24:38.663: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.663: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Aug 11 00:24:38.663: INFO: Checking APIGroup: authentication.k8s.io Aug 11 00:24:38.664: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Aug 11 00:24:38.664: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.664: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Aug 11 00:24:38.664: INFO: Checking APIGroup: authorization.k8s.io Aug 11 00:24:38.665: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Aug 11 00:24:38.665: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.665: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Aug 11 00:24:38.665: INFO: Checking APIGroup: autoscaling Aug 11 00:24:38.666: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Aug 11 00:24:38.666: INFO: Versions found 
[{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Aug 11 00:24:38.666: INFO: autoscaling/v1 matches autoscaling/v1 Aug 11 00:24:38.666: INFO: Checking APIGroup: batch Aug 11 00:24:38.666: INFO: PreferredVersion.GroupVersion: batch/v1 Aug 11 00:24:38.667: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Aug 11 00:24:38.667: INFO: batch/v1 matches batch/v1 Aug 11 00:24:38.667: INFO: Checking APIGroup: certificates.k8s.io Aug 11 00:24:38.667: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Aug 11 00:24:38.667: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.667: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Aug 11 00:24:38.667: INFO: Checking APIGroup: networking.k8s.io Aug 11 00:24:38.668: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Aug 11 00:24:38.668: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.668: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Aug 11 00:24:38.668: INFO: Checking APIGroup: policy Aug 11 00:24:38.669: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Aug 11 00:24:38.669: INFO: Versions found [{policy/v1beta1 v1beta1}] Aug 11 00:24:38.669: INFO: policy/v1beta1 matches policy/v1beta1 Aug 11 00:24:38.669: INFO: Checking APIGroup: rbac.authorization.k8s.io Aug 11 00:24:38.670: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Aug 11 00:24:38.670: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.670: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Aug 11 00:24:38.670: INFO: Checking APIGroup: storage.k8s.io Aug 11 00:24:38.671: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Aug 11 00:24:38.671: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.671: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Aug 11 00:24:38.671: INFO: Checking APIGroup: admissionregistration.k8s.io Aug 11 00:24:38.672: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Aug 11 00:24:38.672: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.672: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Aug 11 00:24:38.672: INFO: Checking APIGroup: apiextensions.k8s.io Aug 11 00:24:38.672: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Aug 11 00:24:38.673: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.673: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Aug 11 00:24:38.673: INFO: Checking APIGroup: scheduling.k8s.io Aug 11 00:24:38.673: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Aug 11 00:24:38.673: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.673: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Aug 11 00:24:38.673: INFO: Checking APIGroup: coordination.k8s.io Aug 11 00:24:38.674: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Aug 11 00:24:38.674: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.674: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Aug 11 00:24:38.674: INFO: Checking APIGroup: node.k8s.io Aug 11 00:24:38.675: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Aug 11 
00:24:38.675: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.675: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Aug 11 00:24:38.675: INFO: Checking APIGroup: discovery.k8s.io Aug 11 00:24:38.675: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Aug 11 00:24:38.675: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Aug 11 00:24:38.675: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:24:38.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-3104" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":205,"skipped":3459,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:24:38.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:24:39.687: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:24:41.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702279, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702279, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702279, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702279, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:24:44.723: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:24:44.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5892" for this suite. STEP: Destroying namespace "webhook-5892-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.425 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":206,"skipped":3467,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:24:45.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:24:45.234: INFO: Creating ReplicaSet my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151 Aug 11 00:24:45.260: INFO: Pod name my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151: Found 0 pods out of 1 Aug 11 00:24:50.264: INFO: Pod name my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151: Found 1 pods out of 1 Aug 11 00:24:50.264: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151" is running Aug 11 00:24:50.267: INFO: Pod "my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151-sgng5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:24:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:24:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:24:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 00:24:45 +0000 UTC Reason: Message:}]) Aug 11 00:24:50.267: INFO: Trying to dial the pod Aug 11 00:24:55.350: INFO: Controller my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151: Got expected result from replica 1 [my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151-sgng5]: "my-hostname-basic-708abd7a-26db-4cb9-8f26-e42d18fe9151-sgng5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:24:55.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replicaset-7139" for this suite. • [SLOW TEST:10.250 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":207,"skipped":3467,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:24:55.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:25:09.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5988" for this suite. • [SLOW TEST:14.073 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":208,"skipped":3472,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:25:09.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2393 to expose endpoints map[] Aug 11 00:25:09.571: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Aug 11 00:25:10.615: INFO: successfully validated that service endpoint-test2 in namespace services-2393 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2393 STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace services-2393 to expose endpoints map[pod1:[80]] Aug 11 00:25:14.710: INFO: successfully validated that service endpoint-test2 in namespace services-2393 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2393 to expose endpoints map[pod1:[80] pod2:[80]] Aug 11 00:25:18.857: INFO: successfully validated that service endpoint-test2 in namespace services-2393 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2393 to expose endpoints map[pod2:[80]] Aug 11 00:25:18.928: INFO: successfully validated that service endpoint-test2 in namespace services-2393 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2393 to expose endpoints map[] Aug 11 00:25:20.019: INFO: successfully validated that service endpoint-test2 in namespace services-2393 exposes endpoints map[] [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:25:20.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2393" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.721 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":209,"skipped":3476,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:25:20.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 11 00:25:20.228: INFO: Waiting up to 5m0s for pod "downward-api-49677c7c-2b3b-422d-9676-cde16d339423" in namespace "downward-api-7605" to be "Succeeded or Failed" Aug 11 00:25:20.231: INFO: Pod "downward-api-49677c7c-2b3b-422d-9676-cde16d339423": Phase="Pending", Reason="", readiness=false. Elapsed: 3.460674ms Aug 11 00:25:22.277: INFO: Pod "downward-api-49677c7c-2b3b-422d-9676-cde16d339423": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049240375s Aug 11 00:25:24.314: INFO: Pod "downward-api-49677c7c-2b3b-422d-9676-cde16d339423": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.085595373s STEP: Saw pod success Aug 11 00:25:24.314: INFO: Pod "downward-api-49677c7c-2b3b-422d-9676-cde16d339423" satisfied condition "Succeeded or Failed" Aug 11 00:25:24.317: INFO: Trying to get logs from node latest-worker2 pod downward-api-49677c7c-2b3b-422d-9676-cde16d339423 container dapi-container: STEP: delete the pod Aug 11 00:25:24.359: INFO: Waiting for pod downward-api-49677c7c-2b3b-422d-9676-cde16d339423 to disappear Aug 11 00:25:24.363: INFO: Pod downward-api-49677c7c-2b3b-422d-9676-cde16d339423 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:25:24.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7605" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:25:24.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 11 00:25:32.559: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 11 00:25:32.568: INFO: Pod pod-with-poststart-http-hook still exists Aug 11 00:25:34.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 11 00:25:34.573: INFO: Pod pod-with-poststart-http-hook still exists Aug 11 00:25:36.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 11 00:25:36.573: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:25:36.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1648" for this suite. 
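For reference, a hedged sketch of the "pod with lifecycle hook" exercised above: after the container starts, the kubelet performs an HTTP GET against the handler pod created in BeforeEach, and the test verifies the hook request arrived. The host, port, path, and image below are illustrative assumptions, not values recorded in this log.

// Sketch under assumptions: a postStart httpGet hook aimed at a separate
// handler pod. handlerIP stands in for the handler pod's address.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func postStartPod(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // assumed image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: handlerIP,
							Path: "/echo?msg=poststart", // assumed path
							Port: intstr.FromInt(8080),  // assumed port
						},
					},
				},
			}},
		},
	}
}

func main() { _ = postStartPod("10.244.2.1") }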
• [SLOW TEST:12.210 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3527,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:25:36.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-3743a6e9-ae23-46fc-b1e7-81572a992ce1 in namespace container-probe-5876 Aug 11 00:25:40.766: INFO: Started pod liveness-3743a6e9-ae23-46fc-b1e7-81572a992ce1 in namespace container-probe-5876 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 00:25:40.769: INFO: Initial restart count of pod liveness-3743a6e9-ae23-46fc-b1e7-81572a992ce1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:29:41.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5876" for this suite. 
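This spec is the inverse of the exec-probe case above: the container keeps listening on port 8080, every tcpSocket probe succeeds across the roughly four-minute observation window, and restartCount stays 0. A minimal sketch follows; the image and args are assumptions (the suite's agnhost netexec server is one plausible listener).

// Sketch under assumptions: a tcpSocket liveness probe against a container
// that keeps serving on 8080, so the pod is never restarted.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func tcpLivenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					TimeoutSeconds:      5,
				},
			}},
		},
	}
}

func main() { _ = tcpLivenessPod() }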
• [SLOW TEST:244.872 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":212,"skipped":3530,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:29:41.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-815/configmap-test-ef522e33-f67b-4a41-8217-ac29430ce2d6 STEP: Creating a pod to test consume configMaps Aug 11 00:29:41.798: INFO: Waiting up to 5m0s for pod "pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39" in namespace "configmap-815" to be "Succeeded or Failed" Aug 11 00:29:42.051: INFO: Pod "pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39": Phase="Pending", Reason="", readiness=false. Elapsed: 252.763081ms Aug 11 00:29:44.056: INFO: Pod "pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257346362s Aug 11 00:29:46.060: INFO: Pod "pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.26170246s STEP: Saw pod success Aug 11 00:29:46.060: INFO: Pod "pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39" satisfied condition "Succeeded or Failed" Aug 11 00:29:46.063: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39 container env-test: STEP: delete the pod Aug 11 00:29:46.121: INFO: Waiting for pod pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39 to disappear Aug 11 00:29:46.135: INFO: Pod pod-configmaps-feaa9c2f-ab64-4acb-949c-5ee96bdd9c39 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:29:46.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-815" for this suite. 
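"Consumable via environment variable" means the pod references a ConfigMap key through valueFrom rather than mounting the ConfigMap as a volume; the env-test container then echoes the variable and the test checks the logs. A small sketch follows, with the key and variable names assumed (chosen to be consistent with the fixture naming above).

// Sketch under assumptions: wiring one ConfigMap key into a container env var.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func configMapEnv(cmName string) []corev1.EnvVar {
	return []corev1.EnvVar{{
		Name: "CONFIG_DATA_1", // assumed variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Key:                  "data-1", // assumed key name
			},
		},
	}}
}

func main() { _ = configMapEnv("configmap-test-ef522e33-f67b-4a41-8217-ac29430ce2d6") }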
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":213,"skipped":3538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:29:46.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-c02f6980-109d-4638-a8ba-fe7dbb04c1cd STEP: Creating a pod to test consume configMaps Aug 11 00:29:46.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467" in namespace "projected-8735" to be "Succeeded or Failed" Aug 11 00:29:46.369: INFO: Pod "pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467": Phase="Pending", Reason="", readiness=false. Elapsed: 13.726067ms Aug 11 00:29:48.374: INFO: Pod "pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018651174s Aug 11 00:29:50.378: INFO: Pod "pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022859816s STEP: Saw pod success Aug 11 00:29:50.378: INFO: Pod "pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467" satisfied condition "Succeeded or Failed" Aug 11 00:29:50.382: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467 container projected-configmap-volume-test: STEP: delete the pod Aug 11 00:29:50.480: INFO: Waiting for pod pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467 to disappear Aug 11 00:29:50.487: INFO: Pod pod-projected-configmaps-26ed3bab-dba8-40fa-b403-7386ac87a467 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:29:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8735" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":214,"skipped":3561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:29:50.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:29:50.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9777" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":215,"skipped":3616,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:29:50.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:29:50.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d" in namespace "downward-api-5174" to be "Succeeded or Failed" Aug 11 00:29:50.983: INFO: Pod "downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.833017ms Aug 11 00:29:53.015: INFO: Pod "downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057233079s Aug 11 00:29:55.019: INFO: Pod "downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0614161s STEP: Saw pod success Aug 11 00:29:55.019: INFO: Pod "downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d" satisfied condition "Succeeded or Failed" Aug 11 00:29:55.048: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d container client-container: STEP: delete the pod Aug 11 00:29:55.092: INFO: Waiting for pod downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d to disappear Aug 11 00:29:55.108: INFO: Pod downwardapi-volume-6a173266-f4a3-4b6c-a8cd-6f84961ec18d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:29:55.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5174" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3638,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:29:55.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Aug 11 00:29:55.204: INFO: Waiting up to 5m0s for pod "var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e" in namespace "var-expansion-8113" to be "Succeeded or Failed" Aug 11 00:29:55.223: INFO: Pod "var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.555291ms Aug 11 00:29:57.390: INFO: Pod "var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185410669s Aug 11 00:29:59.397: INFO: Pod "var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.192743711s STEP: Saw pod success Aug 11 00:29:59.397: INFO: Pod "var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e" satisfied condition "Succeeded or Failed" Aug 11 00:29:59.399: INFO: Trying to get logs from node latest-worker2 pod var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e container dapi-container: STEP: delete the pod Aug 11 00:29:59.480: INFO: Waiting for pod var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e to disappear Aug 11 00:29:59.491: INFO: Pod var-expansion-8398588c-4bab-4cc8-9bd3-adf7cab25f8e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:29:59.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8113" for this suite. 
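The substitution being tested here is kubelet-side: `$(TEST_VAR)` in the container's args is replaced with the value of the container's env var before the process starts, so the shell never sees the `$(...)` syntax. A minimal sketch with assumed names and values:

// Sketch under assumptions: the kubelet expands $(TEST_VAR) in Args from the
// container's Env before launching the shell, so the echo prints "test-value".
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func argExpansionContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c"},
		Args:    []string{"echo value is $(TEST_VAR)"},
		Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
	}
}

func main() { _ = argExpansionContainer() }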
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3646,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:29:59.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:29:59.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1" in namespace "downward-api-5250" to be "Succeeded or Failed" Aug 11 00:29:59.697: INFO: Pod "downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1": Phase="Pending", Reason="", readiness=false. Elapsed: 43.997591ms Aug 11 00:30:01.702: INFO: Pod "downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048368009s Aug 11 00:30:03.706: INFO: Pod "downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053043408s STEP: Saw pod success Aug 11 00:30:03.706: INFO: Pod "downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1" satisfied condition "Succeeded or Failed" Aug 11 00:30:03.710: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1 container client-container: STEP: delete the pod Aug 11 00:30:03.741: INFO: Waiting for pod downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1 to disappear Aug 11 00:30:03.752: INFO: Pod downwardapi-volume-8045f9dc-508d-4988-a9c2-bba2c5acffe1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:30:03.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5250" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":218,"skipped":3656,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:30:03.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2718 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 11 00:30:03.883: INFO: Found 0 stateful pods, waiting for 3 Aug 11 00:30:13.895: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:30:13.895: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:30:13.895: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 11 00:30:23.888: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:30:23.889: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:30:23.889: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:30:23.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2718 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 00:30:27.087: INFO: stderr: "I0811 00:30:26.959296 2648 log.go:181] (0xc00003ba20) (0xc00091f180) Create stream\nI0811 00:30:26.959354 2648 log.go:181] (0xc00003ba20) (0xc00091f180) Stream added, broadcasting: 1\nI0811 00:30:26.961151 2648 log.go:181] (0xc00003ba20) Reply frame received for 1\nI0811 00:30:26.961214 2648 log.go:181] (0xc00003ba20) (0xc00081c8c0) Create stream\nI0811 00:30:26.961230 2648 log.go:181] (0xc00003ba20) (0xc00081c8c0) Stream added, broadcasting: 3\nI0811 00:30:26.962225 2648 log.go:181] (0xc00003ba20) Reply frame received for 3\nI0811 00:30:26.962271 2648 log.go:181] (0xc00003ba20) (0xc00091f900) Create stream\nI0811 00:30:26.962290 2648 log.go:181] (0xc00003ba20) (0xc00091f900) Stream added, broadcasting: 5\nI0811 00:30:26.963239 2648 log.go:181] (0xc00003ba20) Reply frame received for 5\nI0811 00:30:27.046040 2648 log.go:181] (0xc00003ba20) Data frame received for 5\nI0811 00:30:27.046085 2648 log.go:181] (0xc00091f900) (5) Data frame handling\nI0811 00:30:27.046122 2648 log.go:181] (0xc00091f900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 00:30:27.076835 
2648 log.go:181] (0xc00003ba20) Data frame received for 3\nI0811 00:30:27.076859 2648 log.go:181] (0xc00081c8c0) (3) Data frame handling\nI0811 00:30:27.076880 2648 log.go:181] (0xc00003ba20) Data frame received for 5\nI0811 00:30:27.076915 2648 log.go:181] (0xc00091f900) (5) Data frame handling\nI0811 00:30:27.076952 2648 log.go:181] (0xc00081c8c0) (3) Data frame sent\nI0811 00:30:27.076976 2648 log.go:181] (0xc00003ba20) Data frame received for 3\nI0811 00:30:27.076998 2648 log.go:181] (0xc00081c8c0) (3) Data frame handling\nI0811 00:30:27.079367 2648 log.go:181] (0xc00003ba20) Data frame received for 1\nI0811 00:30:27.079387 2648 log.go:181] (0xc00091f180) (1) Data frame handling\nI0811 00:30:27.079401 2648 log.go:181] (0xc00091f180) (1) Data frame sent\nI0811 00:30:27.079533 2648 log.go:181] (0xc00003ba20) (0xc00091f180) Stream removed, broadcasting: 1\nI0811 00:30:27.079681 2648 log.go:181] (0xc00003ba20) Go away received\nI0811 00:30:27.080083 2648 log.go:181] (0xc00003ba20) (0xc00091f180) Stream removed, broadcasting: 1\nI0811 00:30:27.080123 2648 log.go:181] (0xc00003ba20) (0xc00081c8c0) Stream removed, broadcasting: 3\nI0811 00:30:27.080145 2648 log.go:181] (0xc00003ba20) (0xc00091f900) Stream removed, broadcasting: 5\n" Aug 11 00:30:27.087: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 00:30:27.087: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 11 00:30:37.134: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 11 00:30:47.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2718 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 00:30:47.411: INFO: stderr: "I0811 00:30:47.297797 2667 log.go:181] (0xc000c2a000) (0xc000b7b2c0) Create stream\nI0811 00:30:47.297852 2667 log.go:181] (0xc000c2a000) (0xc000b7b2c0) Stream added, broadcasting: 1\nI0811 00:30:47.302464 2667 log.go:181] (0xc000c2a000) Reply frame received for 1\nI0811 00:30:47.302524 2667 log.go:181] (0xc000c2a000) (0xc000b72aa0) Create stream\nI0811 00:30:47.302550 2667 log.go:181] (0xc000c2a000) (0xc000b72aa0) Stream added, broadcasting: 3\nI0811 00:30:47.304288 2667 log.go:181] (0xc000c2a000) Reply frame received for 3\nI0811 00:30:47.304323 2667 log.go:181] (0xc000c2a000) (0xc000b73400) Create stream\nI0811 00:30:47.304332 2667 log.go:181] (0xc000c2a000) (0xc000b73400) Stream added, broadcasting: 5\nI0811 00:30:47.305397 2667 log.go:181] (0xc000c2a000) Reply frame received for 5\nI0811 00:30:47.402343 2667 log.go:181] (0xc000c2a000) Data frame received for 5\nI0811 00:30:47.402380 2667 log.go:181] (0xc000b73400) (5) Data frame handling\nI0811 00:30:47.402394 2667 log.go:181] (0xc000b73400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 00:30:47.402413 2667 log.go:181] (0xc000c2a000) Data frame received for 3\nI0811 00:30:47.402423 2667 log.go:181] (0xc000b72aa0) (3) Data frame handling\nI0811 00:30:47.402433 2667 log.go:181] (0xc000b72aa0) (3) Data frame sent\nI0811 00:30:47.402443 2667 log.go:181] (0xc000c2a000) Data frame received for 3\nI0811 00:30:47.402451 2667 log.go:181] (0xc000b72aa0) (3) Data frame handling\nI0811 00:30:47.402520 
2667 log.go:181] (0xc000c2a000) Data frame received for 5\nI0811 00:30:47.402551 2667 log.go:181] (0xc000b73400) (5) Data frame handling\nI0811 00:30:47.404329 2667 log.go:181] (0xc000c2a000) Data frame received for 1\nI0811 00:30:47.404353 2667 log.go:181] (0xc000b7b2c0) (1) Data frame handling\nI0811 00:30:47.404392 2667 log.go:181] (0xc000b7b2c0) (1) Data frame sent\nI0811 00:30:47.404411 2667 log.go:181] (0xc000c2a000) (0xc000b7b2c0) Stream removed, broadcasting: 1\nI0811 00:30:47.404428 2667 log.go:181] (0xc000c2a000) Go away received\nI0811 00:30:47.404977 2667 log.go:181] (0xc000c2a000) (0xc000b7b2c0) Stream removed, broadcasting: 1\nI0811 00:30:47.405005 2667 log.go:181] (0xc000c2a000) (0xc000b72aa0) Stream removed, broadcasting: 3\nI0811 00:30:47.405016 2667 log.go:181] (0xc000c2a000) (0xc000b73400) Stream removed, broadcasting: 5\n" Aug 11 00:30:47.411: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 00:30:47.411: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 00:30:57.434: INFO: Waiting for StatefulSet statefulset-2718/ss2 to complete update Aug 11 00:30:57.434: INFO: Waiting for Pod statefulset-2718/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 00:30:57.434: INFO: Waiting for Pod statefulset-2718/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 00:31:07.445: INFO: Waiting for StatefulSet statefulset-2718/ss2 to complete update Aug 11 00:31:07.445: INFO: Waiting for Pod statefulset-2718/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 00:31:17.442: INFO: Waiting for StatefulSet statefulset-2718/ss2 to complete update STEP: Rolling back to a previous revision Aug 11 00:31:27.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2718 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 00:31:27.747: INFO: stderr: "I0811 00:31:27.588974 2685 log.go:181] (0xc000f93290) (0xc000aafd60) Create stream\nI0811 00:31:27.589038 2685 log.go:181] (0xc000f93290) (0xc000aafd60) Stream added, broadcasting: 1\nI0811 00:31:27.591745 2685 log.go:181] (0xc000f93290) Reply frame received for 1\nI0811 00:31:27.591784 2685 log.go:181] (0xc000f93290) (0xc000ab30e0) Create stream\nI0811 00:31:27.591797 2685 log.go:181] (0xc000f93290) (0xc000ab30e0) Stream added, broadcasting: 3\nI0811 00:31:27.592639 2685 log.go:181] (0xc000f93290) Reply frame received for 3\nI0811 00:31:27.592699 2685 log.go:181] (0xc000f93290) (0xc000acb2c0) Create stream\nI0811 00:31:27.592818 2685 log.go:181] (0xc000f93290) (0xc000acb2c0) Stream added, broadcasting: 5\nI0811 00:31:27.593779 2685 log.go:181] (0xc000f93290) Reply frame received for 5\nI0811 00:31:27.686125 2685 log.go:181] (0xc000f93290) Data frame received for 5\nI0811 00:31:27.686172 2685 log.go:181] (0xc000acb2c0) (5) Data frame handling\nI0811 00:31:27.686195 2685 log.go:181] (0xc000acb2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 00:31:27.737404 2685 log.go:181] (0xc000f93290) Data frame received for 3\nI0811 00:31:27.737458 2685 log.go:181] (0xc000ab30e0) (3) Data frame handling\nI0811 00:31:27.737493 2685 log.go:181] (0xc000ab30e0) (3) Data frame sent\nI0811 00:31:27.737700 2685 log.go:181] (0xc000f93290) Data frame received for 5\nI0811 00:31:27.737747 2685 log.go:181] (0xc000acb2c0) 
(5) Data frame handling\nI0811 00:31:27.737873 2685 log.go:181] (0xc000f93290) Data frame received for 3\nI0811 00:31:27.737902 2685 log.go:181] (0xc000ab30e0) (3) Data frame handling\nI0811 00:31:27.740379 2685 log.go:181] (0xc000f93290) Data frame received for 1\nI0811 00:31:27.740412 2685 log.go:181] (0xc000aafd60) (1) Data frame handling\nI0811 00:31:27.740437 2685 log.go:181] (0xc000aafd60) (1) Data frame sent\nI0811 00:31:27.740460 2685 log.go:181] (0xc000f93290) (0xc000aafd60) Stream removed, broadcasting: 1\nI0811 00:31:27.740478 2685 log.go:181] (0xc000f93290) Go away received\nI0811 00:31:27.741729 2685 log.go:181] (0xc000f93290) (0xc000aafd60) Stream removed, broadcasting: 1\nI0811 00:31:27.741794 2685 log.go:181] (0xc000f93290) (0xc000ab30e0) Stream removed, broadcasting: 3\nI0811 00:31:27.741820 2685 log.go:181] (0xc000f93290) (0xc000acb2c0) Stream removed, broadcasting: 5\n" Aug 11 00:31:27.747: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 00:31:27.747: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 11 00:31:37.798: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 11 00:31:47.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2718 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 00:31:48.057: INFO: stderr: "I0811 00:31:47.952654 2703 log.go:181] (0xc0009a71e0) (0xc000c7fcc0) Create stream\nI0811 00:31:47.952706 2703 log.go:181] (0xc0009a71e0) (0xc000c7fcc0) Stream added, broadcasting: 1\nI0811 00:31:47.958227 2703 log.go:181] (0xc0009a71e0) Reply frame received for 1\nI0811 00:31:47.958261 2703 log.go:181] (0xc0009a71e0) (0xc00033e280) Create stream\nI0811 00:31:47.958271 2703 log.go:181] (0xc0009a71e0) (0xc00033e280) Stream added, broadcasting: 3\nI0811 00:31:47.959305 2703 log.go:181] (0xc0009a71e0) Reply frame received for 3\nI0811 00:31:47.959339 2703 log.go:181] (0xc0009a71e0) (0xc0003c4460) Create stream\nI0811 00:31:47.959357 2703 log.go:181] (0xc0009a71e0) (0xc0003c4460) Stream added, broadcasting: 5\nI0811 00:31:47.960405 2703 log.go:181] (0xc0009a71e0) Reply frame received for 5\nI0811 00:31:48.050673 2703 log.go:181] (0xc0009a71e0) Data frame received for 5\nI0811 00:31:48.050710 2703 log.go:181] (0xc0003c4460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 00:31:48.050726 2703 log.go:181] (0xc0009a71e0) Data frame received for 3\nI0811 00:31:48.050758 2703 log.go:181] (0xc00033e280) (3) Data frame handling\nI0811 00:31:48.050786 2703 log.go:181] (0xc0003c4460) (5) Data frame sent\nI0811 00:31:48.050811 2703 log.go:181] (0xc0009a71e0) Data frame received for 5\nI0811 00:31:48.050831 2703 log.go:181] (0xc0003c4460) (5) Data frame handling\nI0811 00:31:48.050855 2703 log.go:181] (0xc00033e280) (3) Data frame sent\nI0811 00:31:48.050869 2703 log.go:181] (0xc0009a71e0) Data frame received for 3\nI0811 00:31:48.050874 2703 log.go:181] (0xc00033e280) (3) Data frame handling\nI0811 00:31:48.051953 2703 log.go:181] (0xc0009a71e0) Data frame received for 1\nI0811 00:31:48.051993 2703 log.go:181] (0xc000c7fcc0) (1) Data frame handling\nI0811 00:31:48.052009 2703 log.go:181] (0xc000c7fcc0) (1) Data frame sent\nI0811 00:31:48.052025 2703 log.go:181] (0xc0009a71e0) (0xc000c7fcc0) Stream removed, broadcasting: 1\nI0811 00:31:48.052058 2703 
log.go:181] (0xc0009a71e0) Go away received\nI0811 00:31:48.052461 2703 log.go:181] (0xc0009a71e0) (0xc000c7fcc0) Stream removed, broadcasting: 1\nI0811 00:31:48.052478 2703 log.go:181] (0xc0009a71e0) (0xc00033e280) Stream removed, broadcasting: 3\nI0811 00:31:48.052487 2703 log.go:181] (0xc0009a71e0) (0xc0003c4460) Stream removed, broadcasting: 5\n" Aug 11 00:31:48.057: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 00:31:48.057: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 00:32:08.079: INFO: Waiting for StatefulSet statefulset-2718/ss2 to complete update Aug 11 00:32:08.079: INFO: Waiting for Pod statefulset-2718/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 11 00:32:18.088: INFO: Deleting all statefulset in ns statefulset-2718 Aug 11 00:32:18.091: INFO: Scaling statefulset ss2 to 0 Aug 11 00:32:48.169: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 00:32:48.172: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:32:48.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2718" for this suite. • [SLOW TEST:164.422 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":219,"skipped":3669,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:32:48.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-chhj STEP: Creating a pod to test atomic-volume-subpath Aug 11 00:32:48.305: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-chhj" in namespace "subpath-6936" to be "Succeeded or Failed" Aug 11 
00:32:48.309: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357772ms Aug 11 00:32:50.313: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007979173s Aug 11 00:32:52.316: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 4.011758838s Aug 11 00:32:54.344: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 6.039692806s Aug 11 00:32:56.348: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 8.043342985s Aug 11 00:32:58.358: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 10.053005245s Aug 11 00:33:00.361: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 12.056417829s Aug 11 00:33:02.365: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 14.060160265s Aug 11 00:33:04.369: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 16.064355769s Aug 11 00:33:06.373: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 18.068810221s Aug 11 00:33:08.377: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 20.072209705s Aug 11 00:33:10.381: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Running", Reason="", readiness=true. Elapsed: 22.07618777s Aug 11 00:33:12.385: INFO: Pod "pod-subpath-test-configmap-chhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.080449457s STEP: Saw pod success Aug 11 00:33:12.385: INFO: Pod "pod-subpath-test-configmap-chhj" satisfied condition "Succeeded or Failed" Aug 11 00:33:12.388: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-chhj container test-container-subpath-configmap-chhj: STEP: delete the pod Aug 11 00:33:12.440: INFO: Waiting for pod pod-subpath-test-configmap-chhj to disappear Aug 11 00:33:12.448: INFO: Pod pod-subpath-test-configmap-chhj no longer exists STEP: Deleting pod pod-subpath-test-configmap-chhj Aug 11 00:33:12.448: INFO: Deleting pod "pod-subpath-test-configmap-chhj" in namespace "subpath-6936" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:33:12.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6936" for this suite. 
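
The subpath spec above mounts a single ConfigMap key over a path that already exists in the container image, then verifies the container reads the projected content. Below is a minimal sketch of such a pod in Go using the client-go API types; the pod, volume, ConfigMap, and key names are hypothetical (the test generates randomized ones):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical names throughout; only the SubPath mechanics matter.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/resolv.conf", // a file that already exists in the image
					SubPath:   "resolv.conf",      // project exactly one key over that file
				}},
			}},
		},
	}
	fmt.Printf("%s: key %q mounted over %s\n", pod.Name,
		pod.Spec.Containers[0].VolumeMounts[0].SubPath,
		pod.Spec.Containers[0].VolumeMounts[0].MountPath)
}

With SubPath set, only that one file is replaced; the rest of the image's /etc directory is untouched. Note that subPath mounts do not pick up later ConfigMap updates the way whole-volume (atomic writer) mounts do.
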
• [SLOW TEST:24.255 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":220,"skipped":3670,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:33:12.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:33:12.586: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 11 00:33:17.598: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 11 00:33:17.598: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 11 00:33:19.602: INFO: Creating deployment "test-rollover-deployment" Aug 11 00:33:19.627: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 11 00:33:21.634: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 11 00:33:21.641: INFO: Ensure that both replica sets have 1 created replica Aug 11 00:33:21.646: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 11 00:33:21.655: INFO: Updating deployment test-rollover-deployment Aug 11 00:33:21.655: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 11 00:33:23.695: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 11 00:33:23.702: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 11 00:33:23.707: INFO: all replica sets need to contain the pod-template-hash label Aug 11 00:33:23.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702802, loc:(*time.Location)(0x7e34b60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:33:25.717: INFO: all replica sets need to contain the pod-template-hash label Aug 11 00:33:25.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702804, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:33:27.716: INFO: all replica sets need to contain the pod-template-hash label Aug 11 00:33:27.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702804, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:33:29.714: INFO: all replica sets need to contain the pod-template-hash label Aug 11 00:33:29.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702804, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:33:31.715: INFO: all replica sets need to contain the pod-template-hash label Aug 11 00:33:31.715: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702804, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:33:33.716: INFO: all replica sets need to contain the pod-template-hash label Aug 11 00:33:33.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702804, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732702799, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:33:35.714: INFO: Aug 11 00:33:35.714: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 11 00:33:35.720: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9319 /apis/apps/v1/namespaces/deployment-9319/deployments/test-rollover-deployment 4da2754f-f708-4d7b-b064-8b36003f909d 6057870 2 2020-08-11 00:33:19 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-11 00:33:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:33:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052bd598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-11 00:33:19 +0000 UTC,LastTransitionTime:2020-08-11 00:33:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-08-11 00:33:35 +0000 UTC,LastTransitionTime:2020-08-11 00:33:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 11 00:33:35.723: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-9319 /apis/apps/v1/namespaces/deployment-9319/replicasets/test-rollover-deployment-5797c7764 3b263523-9def-43c7-8300-8347574ddcfa 6057859 2 2020-08-11 00:33:21 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4da2754f-f708-4d7b-b064-8b36003f909d 0xc0052bdae0 0xc0052bdae1}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:33:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4da2754f-f708-4d7b-b064-8b36003f909d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052bdb68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:33:35.723: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 11 00:33:35.723: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9319 /apis/apps/v1/namespaces/deployment-9319/replicasets/test-rollover-controller 6bbdcd77-bf6e-4a54-91eb-fc44275799bc 6057869 2 2020-08-11 00:33:12 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4da2754f-f708-4d7b-b064-8b36003f909d 0xc0052bd9d7 0xc0052bd9d8}] [] [{e2e.test Update apps/v1 2020-08-11 00:33:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:33:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4da2754f-f708-4d7b-b064-8b36003f909d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0052bda78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:33:35.723: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9319 /apis/apps/v1/namespaces/deployment-9319/replicasets/test-rollover-deployment-78bc8b888c 0eeb6969-18e2-48f9-b290-6b7c2a12eae0 6057811 2 2020-08-11 00:33:19 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4da2754f-f708-4d7b-b064-8b36003f909d 0xc0052bdbd7 0xc0052bdbd8}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:33:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4da2754f-f708-4d7b-b064-8b36003f909d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052bdc68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:33:35.726: INFO: Pod "test-rollover-deployment-5797c7764-2grcz" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-2grcz test-rollover-deployment-5797c7764- deployment-9319 /api/v1/namespaces/deployment-9319/pods/test-rollover-deployment-5797c7764-2grcz f72a36a5-b4a3-4fc5-a11f-7ecb1e84965f 6057827 0 2020-08-11 00:33:21 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 3b263523-9def-43c7-8300-8347574ddcfa 0xc004470250 0xc004470251}] [] [{kube-controller-manager Update v1 2020-08-11 00:33:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b263523-9def-43c7-8300-8347574ddcfa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:33:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.160\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9xs8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9xs8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9xs8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:33:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:33:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:33:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:33:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.160,StartTime:2020-08-11 00:33:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:33:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://1cddefd8913aafe86eaaae16f8949e0ca94c3f07cf172b6dd4c4f5a4f6c801a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:33:35.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9319" for this suite. • [SLOW TEST:23.273 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":221,"skipped":3677,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:33:35.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-7b8c062f-7ae4-44ad-9414-a3f7e70c83ec in namespace container-probe-3435 Aug 11 00:33:40.046: INFO: Started pod busybox-7b8c062f-7ae4-44ad-9414-a3f7e70c83ec in namespace container-probe-3435 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 00:33:40.049: INFO: Initial restart count of pod busybox-7b8c062f-7ae4-44ad-9414-a3f7e70c83ec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:37:40.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3435" for this suite. • [SLOW TEST:245.060 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:37:40.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 11 00:37:40.931: INFO: starting watch STEP: patching STEP: updating Aug 11 00:37:40.958: INFO: waiting for watch events with expected annotations Aug 11 00:37:40.958: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:37:41.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4953" for this suite. 
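
The IngressClass steps above are discovery plus plain CRUD/patch/watch against the cluster-scoped /apis/networking.k8s.io/v1 endpoint. A hedged client-go sketch of the create, patch, and delete-collection calls follows; the class name, label, and controller string are illustrative, not the test's actual values:

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Create an IngressClass; it is cluster-scoped, so no namespace is passed.
	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-class", Labels: map[string]string{"suite": "demo"}},
		Spec:       networkingv1.IngressClassSpec{Controller: "example.com/ingress-controller"},
	}
	if _, err := cs.NetworkingV1().IngressClasses().Create(ctx, ic, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Merge-patch an annotation, analogous to the test's "patching" step.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := cs.NetworkingV1().IngressClasses().Patch(ctx, "demo-class", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Delete by label selector, mirroring "deleting a collection".
	err = cs.NetworkingV1().IngressClasses().DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "suite=demo"})
	fmt.Println("delete collection:", err)
}
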
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":223,"skipped":3712,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:37:41.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0811 00:37:42.303579 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 11 00:38:44.535: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:38:44.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6405" for this suite. • [SLOW TEST:63.532 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":224,"skipped":3724,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:38:44.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:38:44.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6" in namespace "projected-7270" to be "Succeeded or Failed" Aug 11 00:38:44.606: INFO: Pod 
"downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.434148ms Aug 11 00:38:46.648: INFO: Pod "downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04535308s Aug 11 00:38:48.787: INFO: Pod "downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184255561s STEP: Saw pod success Aug 11 00:38:48.787: INFO: Pod "downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6" satisfied condition "Succeeded or Failed" Aug 11 00:38:48.791: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6 container client-container: STEP: delete the pod Aug 11 00:38:48.848: INFO: Waiting for pod downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6 to disappear Aug 11 00:38:48.863: INFO: Pod downwardapi-volume-afcbdc4d-976b-4af5-bd36-f0b46b1b4dd6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:38:48.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7270" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":225,"skipped":3743,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:38:48.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:38:49.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5998" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":226,"skipped":3755,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:38:49.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-fb191d3f-4740-4150-bf2f-f04345c09039 in namespace container-probe-8003 Aug 11 00:38:53.142: INFO: Started pod test-webserver-fb191d3f-4740-4150-bf2f-f04345c09039 in namespace container-probe-8003 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 00:38:53.146: INFO: Initial restart count of pod test-webserver-fb191d3f-4740-4150-bf2f-f04345c09039 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:42:53.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8003" for this suite. 
• [SLOW TEST:244.720 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3770,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:42:53.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0811 00:42:55.397872 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 11 00:43:57.424: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:43:57.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6556" for this suite. 
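
The orphaning behavior exercised above hinges on DeleteOptions.PropagationPolicy: with DeletePropagationOrphan, the garbage collector strips the dependents' ownerReferences instead of cascading the delete, so the ReplicaSet outlives its Deployment. A minimal sketch — the deployment name and namespace are assumed, not the test's generated ones:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// Orphan: dependents (the ReplicaSet, and through it the Pods) are kept;
	// the garbage collector only removes their ownerReferences.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments(ns).Delete(ctx, "demo-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}

	// The ReplicaSet should still be listed afterwards.
	rsList, err := cs.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicasets remaining:", len(rsList.Items))
}

The earlier "should delete RS created by deployment when not orphaning" spec is the inverse case: with background or foreground propagation, deleting the Deployment eventually garbage-collects the ReplicaSet and its Pods.
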
• [SLOW TEST:63.646 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":228,"skipped":3787,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:43:57.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-6101c570-ab14-4a5d-af6e-8a4d54719f6e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6101c570-ab14-4a5d-af6e-8a4d54719f6e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:45:28.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9218" for this suite. 
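
The long wait in the projected-ConfigMap test reflects how volume updates propagate: after the API object changes, the kubelet rewrites the projected volume on a later sync pass (roughly its sync period plus the ConfigMap cache TTL), so the test polls the mounted file rather than expecting an instant change. The update itself is an ordinary read-modify-write; the namespace, ConfigMap, and key names below are assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// Fetch the ConfigMap backing the pod's projected volume (hypothetical name).
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, "demo-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // mutate the key the volume projects

	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated; the kubelet refreshes the projected volume on a later sync")
}
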
• [SLOW TEST:90.577 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3791,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:45:28.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:45:28.077: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 11 00:45:31.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-459 create -f -' Aug 11 00:45:35.253: INFO: stderr: "" Aug 11 00:45:35.253: INFO: stdout: "e2e-test-crd-publish-openapi-7335-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 11 00:45:35.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-459 delete e2e-test-crd-publish-openapi-7335-crds test-cr' Aug 11 00:45:35.366: INFO: stderr: "" Aug 11 00:45:35.366: INFO: stdout: "e2e-test-crd-publish-openapi-7335-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 11 00:45:35.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-459 apply -f -' Aug 11 00:45:35.647: INFO: stderr: "" Aug 11 00:45:35.647: INFO: stdout: "e2e-test-crd-publish-openapi-7335-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 11 00:45:35.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-459 delete e2e-test-crd-publish-openapi-7335-crds test-cr' Aug 11 00:45:35.762: INFO: stderr: "" Aug 11 00:45:35.762: INFO: stdout: "e2e-test-crd-publish-openapi-7335-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 11 00:45:35.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7335-crds' Aug 11 00:45:36.021: INFO: stderr: "" Aug 11 00:45:36.021: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7335-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:45:39.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-459" for this suite. • [SLOW TEST:11.002 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":230,"skipped":3803,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:45:39.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:45:43.166: INFO: Waiting up to 5m0s for pod "client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974" in namespace "pods-3610" to be "Succeeded or Failed" Aug 11 00:45:43.184: INFO: Pod "client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974": Phase="Pending", Reason="", readiness=false. Elapsed: 18.730509ms Aug 11 00:45:45.189: INFO: Pod "client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023067829s Aug 11 00:45:47.192: INFO: Pod "client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026752747s STEP: Saw pod success Aug 11 00:45:47.192: INFO: Pod "client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974" satisfied condition "Succeeded or Failed" Aug 11 00:45:47.195: INFO: Trying to get logs from node latest-worker2 pod client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974 container env3cont: STEP: delete the pod Aug 11 00:45:47.216: INFO: Waiting for pod client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974 to disappear Aug 11 00:45:47.231: INFO: Pod client-envvars-8ec29674-aa0f-4d00-827f-ccc13d40c974 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:45:47.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3610" for this suite. • [SLOW TEST:8.224 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3803,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:45:47.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:45:48.185: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:45:50.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703548, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703548, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703548, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703548, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:45:53.242: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:46:03.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7824" for this suite. STEP: Destroying namespace "webhook-7824-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.263 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":232,"skipped":3804,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:46:03.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7406 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7406 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7406 Aug 11 00:46:03.675: INFO: Found 0 stateful pods, waiting for 1 Aug 11 
00:46:13.680: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 11 00:46:13.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 00:46:13.936: INFO: stderr: "I0811 00:46:13.829581 2811 log.go:181] (0xc000826f20) (0xc0007ffae0) Create stream\nI0811 00:46:13.829643 2811 log.go:181] (0xc000826f20) (0xc0007ffae0) Stream added, broadcasting: 1\nI0811 00:46:13.832046 2811 log.go:181] (0xc000826f20) Reply frame received for 1\nI0811 00:46:13.832088 2811 log.go:181] (0xc000826f20) (0xc0000230e0) Create stream\nI0811 00:46:13.832102 2811 log.go:181] (0xc000826f20) (0xc0000230e0) Stream added, broadcasting: 3\nI0811 00:46:13.833209 2811 log.go:181] (0xc000826f20) Reply frame received for 3\nI0811 00:46:13.833249 2811 log.go:181] (0xc000826f20) (0xc000023900) Create stream\nI0811 00:46:13.833263 2811 log.go:181] (0xc000826f20) (0xc000023900) Stream added, broadcasting: 5\nI0811 00:46:13.834138 2811 log.go:181] (0xc000826f20) Reply frame received for 5\nI0811 00:46:13.898426 2811 log.go:181] (0xc000826f20) Data frame received for 5\nI0811 00:46:13.898465 2811 log.go:181] (0xc000023900) (5) Data frame handling\nI0811 00:46:13.898483 2811 log.go:181] (0xc000023900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 00:46:13.930066 2811 log.go:181] (0xc000826f20) Data frame received for 3\nI0811 00:46:13.930111 2811 log.go:181] (0xc0000230e0) (3) Data frame handling\nI0811 00:46:13.930125 2811 log.go:181] (0xc0000230e0) (3) Data frame sent\nI0811 00:46:13.930137 2811 log.go:181] (0xc000826f20) Data frame received for 3\nI0811 00:46:13.930145 2811 log.go:181] (0xc0000230e0) (3) Data frame handling\nI0811 00:46:13.930174 2811 log.go:181] (0xc000826f20) Data frame received for 5\nI0811 00:46:13.930183 2811 log.go:181] (0xc000023900) (5) Data frame handling\nI0811 00:46:13.931815 2811 log.go:181] (0xc000826f20) Data frame received for 1\nI0811 00:46:13.931894 2811 log.go:181] (0xc0007ffae0) (1) Data frame handling\nI0811 00:46:13.931920 2811 log.go:181] (0xc0007ffae0) (1) Data frame sent\nI0811 00:46:13.931938 2811 log.go:181] (0xc000826f20) (0xc0007ffae0) Stream removed, broadcasting: 1\nI0811 00:46:13.931957 2811 log.go:181] (0xc000826f20) Go away received\nI0811 00:46:13.932393 2811 log.go:181] (0xc000826f20) (0xc0007ffae0) Stream removed, broadcasting: 1\nI0811 00:46:13.932412 2811 log.go:181] (0xc000826f20) (0xc0000230e0) Stream removed, broadcasting: 3\nI0811 00:46:13.932419 2811 log.go:181] (0xc000826f20) (0xc000023900) Stream removed, broadcasting: 5\n" Aug 11 00:46:13.936: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 00:46:13.936: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 11 00:46:13.940: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 11 00:46:23.944: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 11 00:46:23.944: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 00:46:23.957: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999564s Aug 11 00:46:24.962: INFO: Verifying statefulset ss doesn't scale past 1 
for another 8.995202473s Aug 11 00:46:25.967: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990293651s Aug 11 00:46:26.971: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98607521s Aug 11 00:46:27.975: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.98190765s Aug 11 00:46:28.981: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.977178249s Aug 11 00:46:29.985: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971890394s Aug 11 00:46:30.989: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967841932s Aug 11 00:46:31.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963765774s Aug 11 00:46:32.997: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.904939ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7406 Aug 11 00:46:34.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 00:46:34.236: INFO: stderr: "I0811 00:46:34.151096 2829 log.go:181] (0xc000a65080) (0xc000be3ae0) Create stream\nI0811 00:46:34.151148 2829 log.go:181] (0xc000a65080) (0xc000be3ae0) Stream added, broadcasting: 1\nI0811 00:46:34.162486 2829 log.go:181] (0xc000a65080) Reply frame received for 1\nI0811 00:46:34.162539 2829 log.go:181] (0xc000a65080) (0xc000a2b180) Create stream\nI0811 00:46:34.162551 2829 log.go:181] (0xc000a65080) (0xc000a2b180) Stream added, broadcasting: 3\nI0811 00:46:34.163323 2829 log.go:181] (0xc000a65080) Reply frame received for 3\nI0811 00:46:34.163353 2829 log.go:181] (0xc000a65080) (0xc0004c2640) Create stream\nI0811 00:46:34.163362 2829 log.go:181] (0xc000a65080) (0xc0004c2640) Stream added, broadcasting: 5\nI0811 00:46:34.164143 2829 log.go:181] (0xc000a65080) Reply frame received for 5\nI0811 00:46:34.228045 2829 log.go:181] (0xc000a65080) Data frame received for 5\nI0811 00:46:34.228106 2829 log.go:181] (0xc0004c2640) (5) Data frame handling\nI0811 00:46:34.228134 2829 log.go:181] (0xc0004c2640) (5) Data frame sent\nI0811 00:46:34.228152 2829 log.go:181] (0xc000a65080) Data frame received for 5\nI0811 00:46:34.228170 2829 log.go:181] (0xc0004c2640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 00:46:34.228207 2829 log.go:181] (0xc000a65080) Data frame received for 3\nI0811 00:46:34.228248 2829 log.go:181] (0xc000a2b180) (3) Data frame handling\nI0811 00:46:34.228285 2829 log.go:181] (0xc000a2b180) (3) Data frame sent\nI0811 00:46:34.228310 2829 log.go:181] (0xc000a65080) Data frame received for 3\nI0811 00:46:34.228334 2829 log.go:181] (0xc000a2b180) (3) Data frame handling\nI0811 00:46:34.230043 2829 log.go:181] (0xc000a65080) Data frame received for 1\nI0811 00:46:34.230066 2829 log.go:181] (0xc000be3ae0) (1) Data frame handling\nI0811 00:46:34.230083 2829 log.go:181] (0xc000be3ae0) (1) Data frame sent\nI0811 00:46:34.230099 2829 log.go:181] (0xc000a65080) (0xc000be3ae0) Stream removed, broadcasting: 1\nI0811 00:46:34.230118 2829 log.go:181] (0xc000a65080) Go away received\nI0811 00:46:34.230461 2829 log.go:181] (0xc000a65080) (0xc000be3ae0) Stream removed, broadcasting: 1\nI0811 00:46:34.230474 2829 log.go:181] (0xc000a65080) (0xc000a2b180) Stream removed, broadcasting: 3\nI0811 00:46:34.230479 2829 log.go:181] (0xc000a65080) (0xc0004c2640) Stream removed, 
broadcasting: 5\n" Aug 11 00:46:34.236: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 00:46:34.236: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 00:46:34.240: INFO: Found 1 stateful pods, waiting for 3 Aug 11 00:46:44.246: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:46:44.246: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 11 00:46:44.246: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 11 00:46:44.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 00:46:44.466: INFO: stderr: "I0811 00:46:44.388164 2847 log.go:181] (0xc0007cb1e0) (0xc000d926e0) Create stream\nI0811 00:46:44.388209 2847 log.go:181] (0xc0007cb1e0) (0xc000d926e0) Stream added, broadcasting: 1\nI0811 00:46:44.393406 2847 log.go:181] (0xc0007cb1e0) Reply frame received for 1\nI0811 00:46:44.393440 2847 log.go:181] (0xc0007cb1e0) (0xc00047e280) Create stream\nI0811 00:46:44.393449 2847 log.go:181] (0xc0007cb1e0) (0xc00047e280) Stream added, broadcasting: 3\nI0811 00:46:44.394288 2847 log.go:181] (0xc0007cb1e0) Reply frame received for 3\nI0811 00:46:44.394319 2847 log.go:181] (0xc0007cb1e0) (0xc0004525a0) Create stream\nI0811 00:46:44.394329 2847 log.go:181] (0xc0007cb1e0) (0xc0004525a0) Stream added, broadcasting: 5\nI0811 00:46:44.395145 2847 log.go:181] (0xc0007cb1e0) Reply frame received for 5\nI0811 00:46:44.458845 2847 log.go:181] (0xc0007cb1e0) Data frame received for 5\nI0811 00:46:44.458870 2847 log.go:181] (0xc0004525a0) (5) Data frame handling\nI0811 00:46:44.458878 2847 log.go:181] (0xc0004525a0) (5) Data frame sent\nI0811 00:46:44.458882 2847 log.go:181] (0xc0007cb1e0) Data frame received for 5\nI0811 00:46:44.458886 2847 log.go:181] (0xc0004525a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 00:46:44.458902 2847 log.go:181] (0xc0007cb1e0) Data frame received for 3\nI0811 00:46:44.458906 2847 log.go:181] (0xc00047e280) (3) Data frame handling\nI0811 00:46:44.458911 2847 log.go:181] (0xc00047e280) (3) Data frame sent\nI0811 00:46:44.458916 2847 log.go:181] (0xc0007cb1e0) Data frame received for 3\nI0811 00:46:44.458920 2847 log.go:181] (0xc00047e280) (3) Data frame handling\nI0811 00:46:44.460347 2847 log.go:181] (0xc0007cb1e0) Data frame received for 1\nI0811 00:46:44.460368 2847 log.go:181] (0xc000d926e0) (1) Data frame handling\nI0811 00:46:44.460380 2847 log.go:181] (0xc000d926e0) (1) Data frame sent\nI0811 00:46:44.460394 2847 log.go:181] (0xc0007cb1e0) (0xc000d926e0) Stream removed, broadcasting: 1\nI0811 00:46:44.460579 2847 log.go:181] (0xc0007cb1e0) Go away received\nI0811 00:46:44.460700 2847 log.go:181] (0xc0007cb1e0) (0xc000d926e0) Stream removed, broadcasting: 1\nI0811 00:46:44.460791 2847 log.go:181] (0xc0007cb1e0) (0xc00047e280) Stream removed, broadcasting: 3\nI0811 00:46:44.460811 2847 log.go:181] (0xc0007cb1e0) (0xc0004525a0) Stream removed, broadcasting: 5\n" Aug 11 00:46:44.466: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 00:46:44.466: INFO: 
stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 11 00:46:44.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 00:46:44.756: INFO: stderr: "I0811 00:46:44.649487 2866 log.go:181] (0xc00081ba20) (0xc0008a0820) Create stream\nI0811 00:46:44.649545 2866 log.go:181] (0xc00081ba20) (0xc0008a0820) Stream added, broadcasting: 1\nI0811 00:46:44.653156 2866 log.go:181] (0xc00081ba20) Reply frame received for 1\nI0811 00:46:44.653198 2866 log.go:181] (0xc00081ba20) (0xc00050eb40) Create stream\nI0811 00:46:44.653209 2866 log.go:181] (0xc00081ba20) (0xc00050eb40) Stream added, broadcasting: 3\nI0811 00:46:44.654107 2866 log.go:181] (0xc00081ba20) Reply frame received for 3\nI0811 00:46:44.654151 2866 log.go:181] (0xc00081ba20) (0xc0004bebe0) Create stream\nI0811 00:46:44.654166 2866 log.go:181] (0xc00081ba20) (0xc0004bebe0) Stream added, broadcasting: 5\nI0811 00:46:44.655482 2866 log.go:181] (0xc00081ba20) Reply frame received for 5\nI0811 00:46:44.713391 2866 log.go:181] (0xc00081ba20) Data frame received for 5\nI0811 00:46:44.713418 2866 log.go:181] (0xc0004bebe0) (5) Data frame handling\nI0811 00:46:44.713434 2866 log.go:181] (0xc0004bebe0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 00:46:44.746680 2866 log.go:181] (0xc00081ba20) Data frame received for 3\nI0811 00:46:44.746714 2866 log.go:181] (0xc00050eb40) (3) Data frame handling\nI0811 00:46:44.746737 2866 log.go:181] (0xc00050eb40) (3) Data frame sent\nI0811 00:46:44.746750 2866 log.go:181] (0xc00081ba20) Data frame received for 3\nI0811 00:46:44.746760 2866 log.go:181] (0xc00050eb40) (3) Data frame handling\nI0811 00:46:44.747216 2866 log.go:181] (0xc00081ba20) Data frame received for 5\nI0811 00:46:44.747241 2866 log.go:181] (0xc0004bebe0) (5) Data frame handling\nI0811 00:46:44.749219 2866 log.go:181] (0xc00081ba20) Data frame received for 1\nI0811 00:46:44.749251 2866 log.go:181] (0xc0008a0820) (1) Data frame handling\nI0811 00:46:44.749271 2866 log.go:181] (0xc0008a0820) (1) Data frame sent\nI0811 00:46:44.749286 2866 log.go:181] (0xc00081ba20) (0xc0008a0820) Stream removed, broadcasting: 1\nI0811 00:46:44.749309 2866 log.go:181] (0xc00081ba20) Go away received\nI0811 00:46:44.749763 2866 log.go:181] (0xc00081ba20) (0xc0008a0820) Stream removed, broadcasting: 1\nI0811 00:46:44.749785 2866 log.go:181] (0xc00081ba20) (0xc00050eb40) Stream removed, broadcasting: 3\nI0811 00:46:44.749798 2866 log.go:181] (0xc00081ba20) (0xc0004bebe0) Stream removed, broadcasting: 5\n" Aug 11 00:46:44.757: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 00:46:44.757: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 11 00:46:44.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 11 00:46:44.989: INFO: stderr: "I0811 00:46:44.882117 2884 log.go:181] (0xc0006d51e0) (0xc000ce5a40) Create stream\nI0811 00:46:44.882193 2884 log.go:181] (0xc0006d51e0) (0xc000ce5a40) Stream added, broadcasting: 1\nI0811 00:46:44.887173 2884 log.go:181] (0xc0006d51e0) Reply 
frame received for 1\nI0811 00:46:44.887216 2884 log.go:181] (0xc0006d51e0) (0xc00054ab40) Create stream\nI0811 00:46:44.887228 2884 log.go:181] (0xc0006d51e0) (0xc00054ab40) Stream added, broadcasting: 3\nI0811 00:46:44.888346 2884 log.go:181] (0xc0006d51e0) Reply frame received for 3\nI0811 00:46:44.888388 2884 log.go:181] (0xc0006d51e0) (0xc0004901e0) Create stream\nI0811 00:46:44.888403 2884 log.go:181] (0xc0006d51e0) (0xc0004901e0) Stream added, broadcasting: 5\nI0811 00:46:44.889779 2884 log.go:181] (0xc0006d51e0) Reply frame received for 5\nI0811 00:46:44.951772 2884 log.go:181] (0xc0006d51e0) Data frame received for 5\nI0811 00:46:44.951800 2884 log.go:181] (0xc0004901e0) (5) Data frame handling\nI0811 00:46:44.951819 2884 log.go:181] (0xc0004901e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0811 00:46:44.980427 2884 log.go:181] (0xc0006d51e0) Data frame received for 3\nI0811 00:46:44.980458 2884 log.go:181] (0xc00054ab40) (3) Data frame handling\nI0811 00:46:44.980474 2884 log.go:181] (0xc00054ab40) (3) Data frame sent\nI0811 00:46:44.980707 2884 log.go:181] (0xc0006d51e0) Data frame received for 3\nI0811 00:46:44.980796 2884 log.go:181] (0xc00054ab40) (3) Data frame handling\nI0811 00:46:44.981264 2884 log.go:181] (0xc0006d51e0) Data frame received for 5\nI0811 00:46:44.981274 2884 log.go:181] (0xc0004901e0) (5) Data frame handling\nI0811 00:46:44.983162 2884 log.go:181] (0xc0006d51e0) Data frame received for 1\nI0811 00:46:44.983177 2884 log.go:181] (0xc000ce5a40) (1) Data frame handling\nI0811 00:46:44.983189 2884 log.go:181] (0xc000ce5a40) (1) Data frame sent\nI0811 00:46:44.983371 2884 log.go:181] (0xc0006d51e0) (0xc000ce5a40) Stream removed, broadcasting: 1\nI0811 00:46:44.983397 2884 log.go:181] (0xc0006d51e0) Go away received\nI0811 00:46:44.983924 2884 log.go:181] (0xc0006d51e0) (0xc000ce5a40) Stream removed, broadcasting: 1\nI0811 00:46:44.983953 2884 log.go:181] (0xc0006d51e0) (0xc00054ab40) Stream removed, broadcasting: 3\nI0811 00:46:44.983967 2884 log.go:181] (0xc0006d51e0) (0xc0004901e0) Stream removed, broadcasting: 5\n" Aug 11 00:46:44.989: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 11 00:46:44.989: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 11 00:46:44.989: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 00:46:44.992: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 11 00:46:54.997: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 11 00:46:54.997: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 11 00:46:54.997: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 11 00:46:55.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999735s Aug 11 00:46:56.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973833623s Aug 11 00:46:57.038: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968472238s Aug 11 00:46:58.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963562002s Aug 11 00:46:59.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958626022s Aug 11 00:47:00.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.953234001s Aug 11 00:47:01.059: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 3.948016185s Aug 11 00:47:02.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.94244552s Aug 11 00:47:03.069: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93671757s Aug 11 00:47:04.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 932.565845ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7406 Aug 11 00:47:05.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 00:47:05.329: INFO: stderr: "I0811 00:47:05.230468 2902 log.go:181] (0xc0001f0000) (0xc00069a640) Create stream\nI0811 00:47:05.230532 2902 log.go:181] (0xc0001f0000) (0xc00069a640) Stream added, broadcasting: 1\nI0811 00:47:05.232086 2902 log.go:181] (0xc0001f0000) Reply frame received for 1\nI0811 00:47:05.232115 2902 log.go:181] (0xc0001f0000) (0xc00069b2c0) Create stream\nI0811 00:47:05.232123 2902 log.go:181] (0xc0001f0000) (0xc00069b2c0) Stream added, broadcasting: 3\nI0811 00:47:05.233123 2902 log.go:181] (0xc0001f0000) Reply frame received for 3\nI0811 00:47:05.233163 2902 log.go:181] (0xc0001f0000) (0xc0005808c0) Create stream\nI0811 00:47:05.233173 2902 log.go:181] (0xc0001f0000) (0xc0005808c0) Stream added, broadcasting: 5\nI0811 00:47:05.234034 2902 log.go:181] (0xc0001f0000) Reply frame received for 5\nI0811 00:47:05.321958 2902 log.go:181] (0xc0001f0000) Data frame received for 3\nI0811 00:47:05.322008 2902 log.go:181] (0xc00069b2c0) (3) Data frame handling\nI0811 00:47:05.322028 2902 log.go:181] (0xc00069b2c0) (3) Data frame sent\nI0811 00:47:05.322046 2902 log.go:181] (0xc0001f0000) Data frame received for 3\nI0811 00:47:05.322068 2902 log.go:181] (0xc0001f0000) Data frame received for 5\nI0811 00:47:05.322103 2902 log.go:181] (0xc0005808c0) (5) Data frame handling\nI0811 00:47:05.322118 2902 log.go:181] (0xc0005808c0) (5) Data frame sent\nI0811 00:47:05.322128 2902 log.go:181] (0xc0001f0000) Data frame received for 5\nI0811 00:47:05.322136 2902 log.go:181] (0xc0005808c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 00:47:05.322174 2902 log.go:181] (0xc00069b2c0) (3) Data frame handling\nI0811 00:47:05.323497 2902 log.go:181] (0xc0001f0000) Data frame received for 1\nI0811 00:47:05.323526 2902 log.go:181] (0xc00069a640) (1) Data frame handling\nI0811 00:47:05.323542 2902 log.go:181] (0xc00069a640) (1) Data frame sent\nI0811 00:47:05.323559 2902 log.go:181] (0xc0001f0000) (0xc00069a640) Stream removed, broadcasting: 1\nI0811 00:47:05.323824 2902 log.go:181] (0xc0001f0000) Go away received\nI0811 00:47:05.323995 2902 log.go:181] (0xc0001f0000) (0xc00069a640) Stream removed, broadcasting: 1\nI0811 00:47:05.324020 2902 log.go:181] (0xc0001f0000) (0xc00069b2c0) Stream removed, broadcasting: 3\nI0811 00:47:05.324044 2902 log.go:181] (0xc0001f0000) (0xc0005808c0) Stream removed, broadcasting: 5\n" Aug 11 00:47:05.329: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 00:47:05.329: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 00:47:05.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Aug 11 00:47:05.549: INFO: stderr: "I0811 00:47:05.470480 2920 log.go:181] (0xc000f196b0) (0xc000a11180) Create stream\nI0811 00:47:05.470573 2920 log.go:181] (0xc000f196b0) (0xc000a11180) Stream added, broadcasting: 1\nI0811 00:47:05.474967 2920 log.go:181] (0xc000f196b0) Reply frame received for 1\nI0811 00:47:05.475249 2920 log.go:181] (0xc000f196b0) (0xc000a65360) Create stream\nI0811 00:47:05.475358 2920 log.go:181] (0xc000f196b0) (0xc000a65360) Stream added, broadcasting: 3\nI0811 00:47:05.476474 2920 log.go:181] (0xc000f196b0) Reply frame received for 3\nI0811 00:47:05.476503 2920 log.go:181] (0xc000f196b0) (0xc000f10140) Create stream\nI0811 00:47:05.476513 2920 log.go:181] (0xc000f196b0) (0xc000f10140) Stream added, broadcasting: 5\nI0811 00:47:05.477559 2920 log.go:181] (0xc000f196b0) Reply frame received for 5\nI0811 00:47:05.545325 2920 log.go:181] (0xc000f196b0) Data frame received for 5\nI0811 00:47:05.545354 2920 log.go:181] (0xc000f10140) (5) Data frame handling\nI0811 00:47:05.545361 2920 log.go:181] (0xc000f10140) (5) Data frame sent\nI0811 00:47:05.545368 2920 log.go:181] (0xc000f196b0) Data frame received for 5\nI0811 00:47:05.545375 2920 log.go:181] (0xc000f10140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 00:47:05.545392 2920 log.go:181] (0xc000f196b0) Data frame received for 3\nI0811 00:47:05.545397 2920 log.go:181] (0xc000a65360) (3) Data frame handling\nI0811 00:47:05.545402 2920 log.go:181] (0xc000a65360) (3) Data frame sent\nI0811 00:47:05.545406 2920 log.go:181] (0xc000f196b0) Data frame received for 3\nI0811 00:47:05.545409 2920 log.go:181] (0xc000a65360) (3) Data frame handling\nI0811 00:47:05.546297 2920 log.go:181] (0xc000f196b0) Data frame received for 1\nI0811 00:47:05.546315 2920 log.go:181] (0xc000a11180) (1) Data frame handling\nI0811 00:47:05.546327 2920 log.go:181] (0xc000a11180) (1) Data frame sent\nI0811 00:47:05.546339 2920 log.go:181] (0xc000f196b0) (0xc000a11180) Stream removed, broadcasting: 1\nI0811 00:47:05.546351 2920 log.go:181] (0xc000f196b0) Go away received\nI0811 00:47:05.546766 2920 log.go:181] (0xc000f196b0) (0xc000a11180) Stream removed, broadcasting: 1\nI0811 00:47:05.546782 2920 log.go:181] (0xc000f196b0) (0xc000a65360) Stream removed, broadcasting: 3\nI0811 00:47:05.546790 2920 log.go:181] (0xc000f196b0) (0xc000f10140) Stream removed, broadcasting: 5\n" Aug 11 00:47:05.549: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 00:47:05.550: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 00:47:05.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7406 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 11 00:47:05.775: INFO: stderr: "I0811 00:47:05.699327 2938 log.go:181] (0xc0005c9600) (0xc000469ea0) Create stream\nI0811 00:47:05.699377 2938 log.go:181] (0xc0005c9600) (0xc000469ea0) Stream added, broadcasting: 1\nI0811 00:47:05.703351 2938 log.go:181] (0xc0005c9600) Reply frame received for 1\nI0811 00:47:05.703397 2938 log.go:181] (0xc0005c9600) (0xc0008bb7c0) Create stream\nI0811 00:47:05.703413 2938 log.go:181] (0xc0005c9600) (0xc0008bb7c0) Stream added, broadcasting: 3\nI0811 00:47:05.704230 2938 log.go:181] (0xc0005c9600) Reply frame received for 3\nI0811 00:47:05.704257 2938 log.go:181] (0xc0005c9600) 
(0xc00098bea0) Create stream\nI0811 00:47:05.704266 2938 log.go:181] (0xc0005c9600) (0xc00098bea0) Stream added, broadcasting: 5\nI0811 00:47:05.705176 2938 log.go:181] (0xc0005c9600) Reply frame received for 5\nI0811 00:47:05.768285 2938 log.go:181] (0xc0005c9600) Data frame received for 5\nI0811 00:47:05.768314 2938 log.go:181] (0xc00098bea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0811 00:47:05.768333 2938 log.go:181] (0xc0005c9600) Data frame received for 3\nI0811 00:47:05.768354 2938 log.go:181] (0xc0008bb7c0) (3) Data frame handling\nI0811 00:47:05.768365 2938 log.go:181] (0xc0008bb7c0) (3) Data frame sent\nI0811 00:47:05.768377 2938 log.go:181] (0xc0005c9600) Data frame received for 3\nI0811 00:47:05.768405 2938 log.go:181] (0xc0008bb7c0) (3) Data frame handling\nI0811 00:47:05.768432 2938 log.go:181] (0xc00098bea0) (5) Data frame sent\nI0811 00:47:05.768474 2938 log.go:181] (0xc0005c9600) Data frame received for 5\nI0811 00:47:05.768483 2938 log.go:181] (0xc00098bea0) (5) Data frame handling\nI0811 00:47:05.769968 2938 log.go:181] (0xc0005c9600) Data frame received for 1\nI0811 00:47:05.769986 2938 log.go:181] (0xc000469ea0) (1) Data frame handling\nI0811 00:47:05.769995 2938 log.go:181] (0xc000469ea0) (1) Data frame sent\nI0811 00:47:05.770006 2938 log.go:181] (0xc0005c9600) (0xc000469ea0) Stream removed, broadcasting: 1\nI0811 00:47:05.770017 2938 log.go:181] (0xc0005c9600) Go away received\nI0811 00:47:05.770469 2938 log.go:181] (0xc0005c9600) (0xc000469ea0) Stream removed, broadcasting: 1\nI0811 00:47:05.770493 2938 log.go:181] (0xc0005c9600) (0xc0008bb7c0) Stream removed, broadcasting: 3\nI0811 00:47:05.770502 2938 log.go:181] (0xc0005c9600) (0xc00098bea0) Stream removed, broadcasting: 5\n" Aug 11 00:47:05.775: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 11 00:47:05.775: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 11 00:47:05.775: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 11 00:47:35.812: INFO: Deleting all statefulset in ns statefulset-7406 Aug 11 00:47:35.815: INFO: Scaling statefulset ss to 0 Aug 11 00:47:35.823: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 00:47:35.825: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:47:35.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7406" for this suite. 
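------------------------------
The ordering contract this spec verifies comes from the StatefulSet's default OrderedReady pod management policy combined with a readiness probe: pods are created one at a time in ordinal order (ss-0, ss-1, ss-2), each only after its predecessor is Ready, and removed in reverse order; a NotReady pod — here induced by mv-ing index.html out of htdocs — halts scaling in both directions, which is exactly the "doesn't scale past N" polling visible above. A sketch of a StatefulSet shaped like the one in the log, written against client-go of the same era (v0.19.x, where the probe handler field is still named Handler); all names are assumptions:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(3)
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test", // headless Service, created separately
			Replicas:    &replicas,
			// OrderedReady is the default; spelled out because it is the
			// behavior this spec asserts.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4.38-alpine",
						// Moving index.html out of htdocs fails this probe,
						// marks the pod NotReady, and freezes ordered
						// scale-up and scale-down alike.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().StatefulSets("default").Create(
		context.TODO(), ss, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------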
• [SLOW TEST:92.343 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":233,"skipped":3822,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:47:35.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:47:42.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7056" for this suite. STEP: Destroying namespace "nsdeletetest-4286" for this suite. Aug 11 00:47:42.216: INFO: Namespace nsdeletetest-4286 was already deleted STEP: Destroying namespace "nsdeletetest-859" for this suite. 
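------------------------------
What this spec pins down is that namespace deletion cascades: while the Namespace sits in Terminating, the namespace controller removes every namespaced object inside it (here, a Service), and a namespace recreated under the same name starts empty. A compressed sketch of the same flow, with hypothetical names:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	ns, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdelete-"}},
		metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns.Name).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Wait for the namespace finalizer to finish; the Service goes with it.
	// (A real client would bound this loop with a timeout.)
	for {
		if _, err := cs.CoreV1().Namespaces().Get(ctx, ns.Name, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("namespace and its service are gone")
}
------------------------------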
• [SLOW TEST:6.372 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":234,"skipped":3837,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:47:42.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-6890447e-f65f-4777-a277-dbe68f3dd101 STEP: Creating a pod to test consume configMaps Aug 11 00:47:42.345: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126" in namespace "configmap-7388" to be "Succeeded or Failed" Aug 11 00:47:42.348: INFO: Pod "pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126": Phase="Pending", Reason="", readiness=false. Elapsed: 3.57602ms Aug 11 00:47:44.353: INFO: Pod "pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008036271s Aug 11 00:47:46.357: INFO: Pod "pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012337506s STEP: Saw pod success Aug 11 00:47:46.357: INFO: Pod "pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126" satisfied condition "Succeeded or Failed" Aug 11 00:47:46.359: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126 container configmap-volume-test: STEP: delete the pod Aug 11 00:47:46.430: INFO: Waiting for pod pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126 to disappear Aug 11 00:47:46.434: INFO: Pod pod-configmaps-5f31b4c8-7730-40b5-8a7d-bd93d9ec1126 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:47:46.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7388" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:47:46.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:47:53.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3095" for this suite. • [SLOW TEST:7.090 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":236,"skipped":3880,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:47:53.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 11 00:47:53.663: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:53.688: INFO: Number of nodes with available pods: 0 Aug 11 00:47:53.688: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:47:54.693: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:54.697: INFO: Number of nodes with available pods: 0 Aug 11 00:47:54.697: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:47:55.694: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:55.698: INFO: Number of nodes with available pods: 0 Aug 11 00:47:55.698: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:47:56.792: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:56.796: INFO: Number of nodes with available pods: 0 Aug 11 00:47:56.796: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:47:57.694: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:57.697: INFO: Number of nodes with available pods: 1 Aug 11 00:47:57.697: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:47:58.692: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:58.695: INFO: Number of nodes with available pods: 2 Aug 11 00:47:58.695: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 11 00:47:58.778: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:58.781: INFO: Number of nodes with available pods: 1 Aug 11 00:47:58.781: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:47:59.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:47:59.791: INFO: Number of nodes with available pods: 1 Aug 11 00:47:59.791: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:48:00.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:48:00.791: INFO: Number of nodes with available pods: 1 Aug 11 00:48:00.791: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:48:01.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:48:01.791: INFO: Number of nodes with available pods: 1 Aug 11 00:48:01.791: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:48:02.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:48:02.792: INFO: Number of nodes with available pods: 1 Aug 11 00:48:02.792: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:48:03.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:48:03.791: INFO: Number of nodes with available pods: 1 Aug 11 00:48:03.791: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:48:04.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:48:04.790: INFO: Number of nodes with available pods: 1 Aug 11 00:48:04.790: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:48:05.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:48:05.791: INFO: Number of nodes with available pods: 2 Aug 11 00:48:05.791: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7008, will wait for the garbage collector to delete the pods Aug 11 00:48:05.854: INFO: Deleting DaemonSet.extensions daemon-set took: 6.459956ms Aug 11 00:48:06.254: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.251203ms Aug 11 00:48:13.860: INFO: Number of nodes with available pods: 0 Aug 11 00:48:13.860: INFO: Number of running nodes: 0, number of available pods: 0 Aug 11 00:48:13.863: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7008/daemonsets","resourceVersion":"6061191"},"items":null} Aug 11 00:48:13.865: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7008/pods","resourceVersion":"6061191"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:48:13.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7008" for this suite. • [SLOW TEST:20.353 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":237,"skipped":3882,"failed":0} SSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:48:13.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:48:13.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1996" for this suite. 
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 11 00:48:13.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 11 00:48:13.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1996" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":238,"skipped":3885,"failed":0}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 11 00:48:13.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 11 00:48:14.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4866" for this suite.
STEP: Destroying namespace "nspatchtest-7b7ae6ce-5301-4846-96f1-82894aaa4e69-1261" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":239,"skipped":3886,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
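The three patch steps above boil down to a single strategic-merge patch against the Namespace object. A minimal client-go sketch of the same call; the target namespace and the label key/value are illustrative, not the ones the suite generates:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch that adds one label; "testLabel" is illustrative.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := client.CoreV1().Namespaces().Patch(context.TODO(), "default",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", ns.Labels)
}

The equivalent one-liner is kubectl patch namespace default -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'.
------------------------------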
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 11 00:48:14.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 11 00:48:14.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 11 00:48:16.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703694, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703694, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703695, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703694, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 11 00:48:19.964: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 11 00:48:19.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4221-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 11 00:48:21.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2789" for this suite.
STEP: Destroying namespace "webhook-2789-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.000 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":240,"skipped":3906,"failed":0}
SSSS
------------------------------
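Registering the mutating webhook is the step that ties the e2e-test-webhook-4221-crds.webhook.example.com resources to the webhook Service deployed above. A minimal client-go sketch of such a registration; the configuration name, group, resource, namespace, and path are illustrative stand-ins for what the suite generates, and the CA bundle is omitted:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	path := "/mutating-custom-resource" // illustrative service path
	port := int32(443)
	none := admissionregistrationv1.SideEffectClassNone
	webhook := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "example-crd-mutator"}, // illustrative name
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "example.webhook.example.com", // illustrative
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Points at the in-cluster webhook Service, as with
				// service:e2e-test-webhook above; namespace is illustrative.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path, Port: &port,
				},
				// CABundle omitted: the suite's "Setting up server cert" step generates it.
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-4221-crds"},
				},
			}},
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	if _, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), webhook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Once a configuration like this is in place, every matching CREATE of the custom resource is sent to the service for mutation before it is persisted, which is exactly what the "Creating a custom resource that should be mutated by the webhook" step exercises.
------------------------------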
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 11 00:48:21.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 11 00:48:21.317: INFO: Waiting up to 5m0s for pod "downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a" in namespace "downward-api-2370" to be "Succeeded or Failed"
Aug 11 00:48:21.364: INFO: Pod "downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.354006ms
Aug 11 00:48:23.367: INFO: Pod "downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04991901s
Aug 11 00:48:25.371: INFO: Pod "downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054142317s
STEP: Saw pod success
Aug 11 00:48:25.371: INFO: Pod "downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a" satisfied condition "Succeeded or Failed"
Aug 11 00:48:25.374: INFO: Trying to get logs from node latest-worker2 pod downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a container dapi-container:
STEP: delete the pod
Aug 11 00:48:25.425: INFO: Waiting for pod downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a to disappear
Aug 11 00:48:25.472: INFO: Pod downward-api-9ded8b8e-1e50-4edc-8af4-71a94cf1054a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 11 00:48:25.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2370" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3910,"failed":0}
SSSSS
------------------------------
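The dapi-container above only has to print its environment; the downward-API fieldRef wiring is what the test asserts on. A minimal client-go sketch of a pod that exposes its name, namespace, and IP the same way, with illustrative pod name and image:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Helper that maps a pod field path onto an environment variable.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name:      name,
			ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-demo", Namespace: "default"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					// The same field paths the conformance test asserts on.
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod; its log should echo POD_NAME, POD_NAMESPACE and POD_IP")
}

------------------------------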
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 11 00:48:25.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 11 00:48:25.524: INFO: Creating deployment "test-recreate-deployment"
Aug 11 00:48:25.555: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 11 00:48:25.579: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 11 00:48:27.586: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 11 00:48:27.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703705, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703705, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703705, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703705, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 00:48:29.594: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 11 00:48:29.602: INFO: Updating deployment test-recreate-deployment
Aug 11 00:48:29.602: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Aug 11 00:48:30.247: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8946 /apis/apps/v1/namespaces/deployment-8946/deployments/test-recreate-deployment 540b0cea-1f5c-4f15-a35d-e29bbbe8b873 6061424 2 2020-08-11 00:48:25 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-11 00:48:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:48:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036cb978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-11 00:48:30 +0000 UTC,LastTransitionTime:2020-08-11 00:48:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-08-11 00:48:30 +0000 UTC,LastTransitionTime:2020-08-11 00:48:25 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Aug 11 00:48:30.250: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667"
of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-8946 /apis/apps/v1/namespaces/deployment-8946/replicasets/test-recreate-deployment-f79dd4667 6d0b80cc-6edf-4910-8ecf-bbceff7d6cba 6061422 1 2020-08-11 00:48:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 540b0cea-1f5c-4f15-a35d-e29bbbe8b873 0xc0036cbe70 0xc0036cbe71}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:48:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"540b0cea-1f5c-4f15-a35d-e29bbbe8b873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036cbee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:48:30.250: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 11 00:48:30.250: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-8946 /apis/apps/v1/namespaces/deployment-8946/replicasets/test-recreate-deployment-c96cf48f 32344c47-2f8a-46d0-b3f8-845d214d05a6 6061413 2 2020-08-11 00:48:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 540b0cea-1f5c-4f15-a35d-e29bbbe8b873 0xc0036cbd6f 0xc0036cbd90}] [] [{kube-controller-manager Update apps/v1 2020-08-11 
00:48:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"540b0cea-1f5c-4f15-a35d-e29bbbe8b873\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036cbe08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:48:30.299: INFO: Pod "test-recreate-deployment-f79dd4667-mkks9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-mkks9 test-recreate-deployment-f79dd4667- deployment-8946 /api/v1/namespaces/deployment-8946/pods/test-recreate-deployment-f79dd4667-mkks9 3e6ac59c-a6c4-443a-9ce4-50f1cefd6751 6061429 0 2020-08-11 00:48:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 6d0b80cc-6edf-4910-8ecf-bbceff7d6cba 0xc0044703d0 0xc0044703d1}] [] [{kube-controller-manager Update v1 2020-08-11 00:48:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d0b80cc-6edf-4910-8ecf-bbceff7d6cba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:48:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gtz96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gtz96,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gtz96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:48:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:48:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:48:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:48:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:48:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 11 00:48:30.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8946" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":242,"skipped":3915,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 11 00:48:30.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-1502
STEP: creating service affinity-clusterip-transition in namespace services-1502
STEP: creating replication controller affinity-clusterip-transition in namespace services-1502
I0811 00:48:30.866801 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1502, replica count: 3
I0811 00:48:33.917236 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0811 00:48:36.917486 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 11 00:48:36.952: INFO: Creating new exec pod
Aug 11 00:48:42.008: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1502 execpod-affinitykg2n5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Aug 11 00:48:42.238: INFO: stderr: "I0811 00:48:42.143270 2954 log.go:181] (0xc000e38420) (0xc0005bef00) Create stream\nI0811 00:48:42.143324 2954 log.go:181] (0xc000e38420) (0xc0005bef00) Stream added, broadcasting: 1\nI0811 00:48:42.147220 2954 log.go:181] (0xc000e38420) Reply frame received for 1\nI0811 00:48:42.147252 2954 log.go:181] (0xc000e38420) (0xc0005be000) Create stream\nI0811 00:48:42.147261 2954 log.go:181] (0xc000e38420) (0xc0005be000) Stream added, broadcasting: 3\nI0811 00:48:42.147976 2954 log.go:181] (0xc000e38420) Reply frame received for 3\nI0811 00:48:42.148003 2954 log.go:181] (0xc000e38420) (0xc0005be0a0) Create stream\nI0811 00:48:42.148012 2954 log.go:181] (0xc000e38420) (0xc0005be0a0) Stream added, broadcasting: 5\nI0811 00:48:42.148795 2954 log.go:181] (0xc000e38420) Reply frame received for 5\nI0811 00:48:42.230760 2954 log.go:181] (0xc000e38420) Data frame received for 3\nI0811 00:48:42.230821 2954 log.go:181] (0xc0005be000) (3) Data frame handling\nI0811 00:48:42.230854 2954 log.go:181] (0xc000e38420) Data frame received for 5\nI0811 00:48:42.230871 2954 log.go:181] (0xc0005be0a0) (5) Data frame handling\nI0811 00:48:42.230889 2954 log.go:181] (0xc0005be0a0) (5) Data frame sent\nI0811 00:48:42.230904 2954 log.go:181] (0xc000e38420) Data frame received for 5\nI0811 00:48:42.230917 2954 log.go:181] (0xc0005be0a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0811 00:48:42.233189 2954 log.go:181] (0xc000e38420) Data frame received for 1\nI0811 00:48:42.233238 2954 log.go:181] (0xc0005bef00) (1) Data frame handling\nI0811 00:48:42.233266 2954 log.go:181] (0xc0005bef00) (1) Data frame sent\nI0811 00:48:42.233294 2954 log.go:181] (0xc000e38420) (0xc0005bef00) Stream removed, broadcasting: 1\nI0811 00:48:42.233345 2954 log.go:181] (0xc000e38420) Go away received\nI0811 00:48:42.233825 2954 log.go:181] (0xc000e38420) (0xc0005bef00) Stream removed, broadcasting: 1\nI0811 00:48:42.233847 2954 log.go:181] (0xc000e38420) (0xc0005be000) Stream removed, broadcasting: 3\nI0811 00:48:42.233859 2954 log.go:181] (0xc000e38420) (0xc0005be0a0) Stream removed, broadcasting: 5\n" Aug 11 00:48:42.238: INFO: stdout: "" Aug 11 00:48:42.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1502 execpod-affinitykg2n5 -- /bin/sh -x -c nc -zv -t -w 2 10.105.24.7 80' Aug 11 00:48:42.453: INFO: stderr: "I0811 00:48:42.371094 2972 log.go:181] (0xc000cded10) (0xc000e1a280) Create stream\nI0811 00:48:42.371160 2972 log.go:181] (0xc000cded10) (0xc000e1a280) Stream added, broadcasting: 1\nI0811 00:48:42.375838 2972 log.go:181] (0xc000cded10) Reply frame received for 1\nI0811 00:48:42.375876 2972 log.go:181] (0xc000cded10) (0xc0006181e0) Create stream\nI0811 00:48:42.375887 2972 log.go:181] (0xc000cded10) (0xc0006181e0) Stream added, broadcasting: 3\nI0811 00:48:42.376884 2972 log.go:181] (0xc000cded10) Reply frame received for 3\nI0811 00:48:42.376912 2972 log.go:181] (0xc000cded10) (0xc0008b8500) Create stream\nI0811 00:48:42.376921 2972 log.go:181] (0xc000cded10) (0xc0008b8500) Stream added, broadcasting: 5\nI0811 00:48:42.377844 2972 log.go:181] (0xc000cded10) Reply frame received for 5\nI0811 00:48:42.446297 2972 
log.go:181] (0xc000cded10) Data frame received for 5\nI0811 00:48:42.446339 2972 log.go:181] (0xc0008b8500) (5) Data frame handling\nI0811 00:48:42.446365 2972 log.go:181] (0xc0008b8500) (5) Data frame sent\nI0811 00:48:42.446381 2972 log.go:181] (0xc000cded10) Data frame received for 5\nI0811 00:48:42.446393 2972 log.go:181] (0xc0008b8500) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.24.7 80\nConnection to 10.105.24.7 80 port [tcp/http] succeeded!\nI0811 00:48:42.446428 2972 log.go:181] (0xc000cded10) Data frame received for 3\nI0811 00:48:42.446458 2972 log.go:181] (0xc0006181e0) (3) Data frame handling\nI0811 00:48:42.447521 2972 log.go:181] (0xc000cded10) Data frame received for 1\nI0811 00:48:42.447548 2972 log.go:181] (0xc000e1a280) (1) Data frame handling\nI0811 00:48:42.447570 2972 log.go:181] (0xc000e1a280) (1) Data frame sent\nI0811 00:48:42.447585 2972 log.go:181] (0xc000cded10) (0xc000e1a280) Stream removed, broadcasting: 1\nI0811 00:48:42.447607 2972 log.go:181] (0xc000cded10) Go away received\nI0811 00:48:42.447931 2972 log.go:181] (0xc000cded10) (0xc000e1a280) Stream removed, broadcasting: 1\nI0811 00:48:42.447954 2972 log.go:181] (0xc000cded10) (0xc0006181e0) Stream removed, broadcasting: 3\nI0811 00:48:42.447963 2972 log.go:181] (0xc000cded10) (0xc0008b8500) Stream removed, broadcasting: 5\n" Aug 11 00:48:42.453: INFO: stdout: "" Aug 11 00:48:42.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1502 execpod-affinitykg2n5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.24.7:80/ ; done' Aug 11 00:48:42.783: INFO: stderr: "I0811 00:48:42.597866 2990 log.go:181] (0xc0008ed340) (0xc000502fa0) Create stream\nI0811 00:48:42.598010 2990 log.go:181] (0xc0008ed340) (0xc000502fa0) Stream added, broadcasting: 1\nI0811 00:48:42.606402 2990 log.go:181] (0xc0008ed340) Reply frame received for 1\nI0811 00:48:42.606451 2990 log.go:181] (0xc0008ed340) (0xc0008d1040) Create stream\nI0811 00:48:42.606463 2990 log.go:181] (0xc0008ed340) (0xc0008d1040) Stream added, broadcasting: 3\nI0811 00:48:42.607635 2990 log.go:181] (0xc0008ed340) Reply frame received for 3\nI0811 00:48:42.607670 2990 log.go:181] (0xc0008ed340) (0xc0008cc320) Create stream\nI0811 00:48:42.607684 2990 log.go:181] (0xc0008ed340) (0xc0008cc320) Stream added, broadcasting: 5\nI0811 00:48:42.608519 2990 log.go:181] (0xc0008ed340) Reply frame received for 5\nI0811 00:48:42.657438 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.657477 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.657494 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.657525 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.657542 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.657560 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.663947 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.663970 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.663990 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.664696 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.664708 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.664714 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.105.24.7:80/\nI0811 00:48:42.665039 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.665064 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.665096 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.672093 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.672108 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.672114 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.673123 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.673134 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.673150 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.673170 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.673182 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.673198 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.680044 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.680069 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.680107 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.681344 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.681367 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.681377 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.681397 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.681414 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.681429 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\nI0811 00:48:42.681437 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.681442 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.681456 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\nI0811 00:48:42.687239 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.687259 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.687274 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.688155 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.688185 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.688198 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.688217 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.688231 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.688241 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.694672 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.694689 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.694703 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.695881 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.695894 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.695903 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.695932 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.695950 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.695967 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\nI0811 00:48:42.695979 2990 log.go:181] 
(0xc0008ed340) Data frame received for 5\nI0811 00:48:42.695992 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.696024 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\nI0811 00:48:42.701751 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.701779 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.701805 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.702334 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.702368 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.702405 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.702432 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.702450 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.702470 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.709555 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.709586 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.709613 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.710463 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.710506 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.710521 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.710538 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.710548 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.710558 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.717438 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.717470 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.717496 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.717972 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.718000 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.718020 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.718138 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.718156 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.718192 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.723960 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.723980 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.723991 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.724694 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.724714 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.724845 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.724881 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.724930 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.724968 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.731094 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.731120 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.731140 2990 log.go:181] (0xc0008d1040) 
(3) Data frame sent\nI0811 00:48:42.731868 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.731897 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.731929 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.731942 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.731973 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.731998 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.737575 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.737593 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.737602 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.738356 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.738370 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.738377 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.738413 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.738452 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.738473 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.745024 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.745046 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.745071 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.745830 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.745846 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.745856 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.745874 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.745898 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.745919 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.753147 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.753179 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.753214 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.753698 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.753710 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.753716 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.753761 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.753793 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.753813 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.760257 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.760429 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.760468 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.761056 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.761101 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.761120 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.761147 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.761168 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.761193 2990 log.go:181] (0xc0008cc320) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.767229 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.767245 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.767257 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.767756 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.767772 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.767782 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.767842 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.767862 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.767881 2990 log.go:181] (0xc0008cc320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:42.775187 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.775222 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.775250 2990 log.go:181] (0xc0008d1040) (3) Data frame sent\nI0811 00:48:42.776022 2990 log.go:181] (0xc0008ed340) Data frame received for 3\nI0811 00:48:42.776054 2990 log.go:181] (0xc0008d1040) (3) Data frame handling\nI0811 00:48:42.776275 2990 log.go:181] (0xc0008ed340) Data frame received for 5\nI0811 00:48:42.776298 2990 log.go:181] (0xc0008cc320) (5) Data frame handling\nI0811 00:48:42.778262 2990 log.go:181] (0xc0008ed340) Data frame received for 1\nI0811 00:48:42.778280 2990 log.go:181] (0xc000502fa0) (1) Data frame handling\nI0811 00:48:42.778290 2990 log.go:181] (0xc000502fa0) (1) Data frame sent\nI0811 00:48:42.778380 2990 log.go:181] (0xc0008ed340) (0xc000502fa0) Stream removed, broadcasting: 1\nI0811 00:48:42.778469 2990 log.go:181] (0xc0008ed340) Go away received\nI0811 00:48:42.778883 2990 log.go:181] (0xc0008ed340) (0xc000502fa0) Stream removed, broadcasting: 1\nI0811 00:48:42.778918 2990 log.go:181] (0xc0008ed340) (0xc0008d1040) Stream removed, broadcasting: 3\nI0811 00:48:42.778944 2990 log.go:181] (0xc0008ed340) (0xc0008cc320) Stream removed, broadcasting: 5\n" Aug 11 00:48:42.784: INFO: stdout: "\naffinity-clusterip-transition-5lll5\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-5lll5\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-5lll5\naffinity-clusterip-transition-5lll5\naffinity-clusterip-transition-5lll5\naffinity-clusterip-transition-5lll5\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wbqzs\naffinity-clusterip-transition-wbqzs" Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-5lll5 Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-5lll5 Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:42.784: INFO: Received response from host: 
affinity-clusterip-transition-5lll5 Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-5lll5 Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-5lll5 Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-5lll5 Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.784: INFO: Received response from host: affinity-clusterip-transition-wbqzs Aug 11 00:48:42.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1502 execpod-affinitykg2n5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.24.7:80/ ; done' Aug 11 00:48:43.114: INFO: stderr: "I0811 00:48:42.937904 3009 log.go:181] (0xc00003a160) (0xc00091d2c0) Create stream\nI0811 00:48:42.937947 3009 log.go:181] (0xc00003a160) (0xc00091d2c0) Stream added, broadcasting: 1\nI0811 00:48:42.939357 3009 log.go:181] (0xc00003a160) Reply frame received for 1\nI0811 00:48:42.939387 3009 log.go:181] (0xc00003a160) (0xc0008b4e60) Create stream\nI0811 00:48:42.939396 3009 log.go:181] (0xc00003a160) (0xc0008b4e60) Stream added, broadcasting: 3\nI0811 00:48:42.940234 3009 log.go:181] (0xc00003a160) Reply frame received for 3\nI0811 00:48:42.940250 3009 log.go:181] (0xc00003a160) (0xc00091d860) Create stream\nI0811 00:48:42.940256 3009 log.go:181] (0xc00003a160) (0xc00091d860) Stream added, broadcasting: 5\nI0811 00:48:42.941203 3009 log.go:181] (0xc00003a160) Reply frame received for 5\nI0811 00:48:43.007298 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.007337 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.007349 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.007364 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.007374 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.007382 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.010540 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.010650 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.010680 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.011574 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.011621 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.011644 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.011662 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.011699 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.011731 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.017482 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.017494 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.017500 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.018201 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.018239 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.018252 3009 log.go:181] 
(0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.018267 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.018275 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.018287 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.023710 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.023722 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.023727 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.029186 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.029214 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.029223 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.029233 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.029239 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.029244 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.030206 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.030226 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.030238 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.030919 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.030946 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.030959 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.030979 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.030987 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.030999 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.035177 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.035198 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.035215 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.035516 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.035532 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.035559 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.035566 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.035572 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.035577 3009 log.go:181] (0xc00091d860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.035590 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.035614 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.035631 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.042404 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.042420 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.042433 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.043005 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.043024 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.043037 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.043056 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.043067 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.043080 3009 log.go:181] 
(0xc00091d860) (5) Data frame sent\n+ echo\nI0811 00:48:43.043093 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.043130 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.043151 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.048283 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.048314 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.048349 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.048712 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.048830 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.048849 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.048865 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.048881 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.048895 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.048905 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.048911 3009 log.go:181] (0xc00091d860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.048924 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.053833 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.053912 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.053936 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.054533 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.054568 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.054578 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -sI0811 00:48:43.054595 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.054623 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.054638 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.054658 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.054680 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.054699 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.059607 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.059635 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.059659 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.060313 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.060349 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.060365 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.060386 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.060398 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.060411 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -sI0811 00:48:43.060424 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.060464 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.060482 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.065223 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.065242 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.065267 3009 log.go:181] 
(0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.066092 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.066120 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.066145 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.066159 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.066183 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.066219 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.071653 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.071780 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.071829 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.072374 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.072392 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.072397 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.072406 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.072412 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.072417 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.072422 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.072426 3009 log.go:181] (0xc00091d860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.072436 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.079270 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.079290 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.079320 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.080096 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.080111 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.080147 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.080183 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.080198 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.080215 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.086792 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.086814 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.086832 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.087654 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.087696 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.087720 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.087741 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.087773 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.087799 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.092856 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.092882 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.092907 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.093261 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.093290 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.093320 3009 log.go:181] 
(0xc00003a160) Data frame received for 5\nI0811 00:48:43.093344 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.093361 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.093370 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.093377 3009 log.go:181] (0xc00091d860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.093389 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.093402 3009 log.go:181] (0xc00091d860) (5) Data frame sent\nI0811 00:48:43.096916 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.096951 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.096974 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.097643 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.097682 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.097693 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.097707 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.097714 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.097722 3009 log.go:181] (0xc00091d860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.24.7:80/\nI0811 00:48:43.104908 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.104946 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.104969 3009 log.go:181] (0xc0008b4e60) (3) Data frame sent\nI0811 00:48:43.105424 3009 log.go:181] (0xc00003a160) Data frame received for 5\nI0811 00:48:43.105468 3009 log.go:181] (0xc00091d860) (5) Data frame handling\nI0811 00:48:43.105649 3009 log.go:181] (0xc00003a160) Data frame received for 3\nI0811 00:48:43.105674 3009 log.go:181] (0xc0008b4e60) (3) Data frame handling\nI0811 00:48:43.107364 3009 log.go:181] (0xc00003a160) Data frame received for 1\nI0811 00:48:43.107382 3009 log.go:181] (0xc00091d2c0) (1) Data frame handling\nI0811 00:48:43.107393 3009 log.go:181] (0xc00091d2c0) (1) Data frame sent\nI0811 00:48:43.107404 3009 log.go:181] (0xc00003a160) (0xc00091d2c0) Stream removed, broadcasting: 1\nI0811 00:48:43.107519 3009 log.go:181] (0xc00003a160) Go away received\nI0811 00:48:43.107745 3009 log.go:181] (0xc00003a160) (0xc00091d2c0) Stream removed, broadcasting: 1\nI0811 00:48:43.107763 3009 log.go:181] (0xc00003a160) (0xc0008b4e60) Stream removed, broadcasting: 3\nI0811 00:48:43.107772 3009 log.go:181] (0xc00003a160) (0xc00091d860) Stream removed, broadcasting: 5\n" Aug 11 00:48:43.115: INFO: stdout: "\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64\naffinity-clusterip-transition-wng64" Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received 
response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Received response from host: affinity-clusterip-transition-wng64 Aug 11 00:48:43.115: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1502, will wait for the garbage collector to delete the pods Aug 11 00:48:43.320: INFO: Deleting ReplicationController affinity-clusterip-transition took: 115.73109ms Aug 11 00:48:43.620: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 300.287214ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:48:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1502" for this suite. 
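The affinity transition exercised above can be reproduced outside the suite with plain kubectl. A minimal sketch, assuming an existing ClusterIP service named affinity-demo (an illustrative placeholder, not a name generated by this run):

    # Pin each client to a single backend pod, as the test enables mid-run;
    # a curl loop like the one above then returns one hostname repeatedly.
    kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
    # Switch affinity back off; responses are load-balanced across pods again.
    kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'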
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.017 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":243,"skipped":3991,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:48:53.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Aug 11 00:48:53.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-7078 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 11 00:48:53.543: INFO: stderr: "" Aug 11 00:48:53.543: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Aug 11 00:48:53.543: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 11 00:48:53.543: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7078" to be "running and ready, or succeeded" Aug 11 00:48:53.552: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.681546ms Aug 11 00:48:55.556: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012752149s Aug 11 00:48:57.559: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.015963972s Aug 11 00:48:57.559: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 11 00:48:57.559: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Aug 11 00:48:57.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078' Aug 11 00:48:57.674: INFO: stderr: "" Aug 11 00:48:57.674: INFO: stdout: "I0811 00:48:56.372269 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/w9bl 492\nI0811 00:48:56.572400 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/bwhp 498\nI0811 00:48:56.772419 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/rbxq 532\nI0811 00:48:56.972413 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/rmx 279\nI0811 00:48:57.172414 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/26th 399\nI0811 00:48:57.372398 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/xq5 394\nI0811 00:48:57.572435 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/6d8 460\n" Aug 11 00:48:59.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078' Aug 11 00:48:59.817: INFO: stderr: "" Aug 11 00:48:59.817: INFO: stdout: "I0811 00:48:56.372269 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/w9bl 492\nI0811 00:48:56.572400 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/bwhp 498\nI0811 00:48:56.772419 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/rbxq 532\nI0811 00:48:56.972413 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/rmx 279\nI0811 00:48:57.172414 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/26th 399\nI0811 00:48:57.372398 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/xq5 394\nI0811 00:48:57.572435 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/6d8 460\nI0811 00:48:57.772469 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/vkj 351\nI0811 00:48:57.972380 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/sq4 495\nI0811 00:48:58.172391 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/qf7 258\nI0811 00:48:58.372411 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rntz 331\nI0811 00:48:58.572466 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/hmx 439\nI0811 00:48:58.772424 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/52tx 547\nI0811 00:48:58.972405 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/qn6 292\nI0811 00:48:59.172434 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/254 344\nI0811 00:48:59.372420 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/gvk5 201\nI0811 00:48:59.572402 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/h4v 247\nI0811 00:48:59.772403 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/pth 258\n" STEP: limiting log lines Aug 11 00:48:59.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078 --tail=1' Aug 11 00:48:59.931: INFO: stderr: "" Aug 11 00:48:59.931: INFO: stdout: "I0811 00:48:59.772403 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/pth 258\n" Aug 11 00:48:59.932: INFO: got output "I0811 00:48:59.772403 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/pth 258\n" STEP: limiting log bytes Aug 11 00:48:59.932: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078 --limit-bytes=1' Aug 11 00:49:00.033: INFO: stderr: "" Aug 11 00:49:00.033: INFO: stdout: "I" Aug 11 00:49:00.033: INFO: got output "I" STEP: exposing timestamps Aug 11 00:49:00.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078 --tail=1 --timestamps' Aug 11 00:49:00.141: INFO: stderr: "" Aug 11 00:49:00.141: INFO: stdout: "2020-08-11T00:48:59.972988648Z I0811 00:48:59.972437 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/grg 398\n" Aug 11 00:49:00.141: INFO: got output "2020-08-11T00:48:59.972988648Z I0811 00:48:59.972437 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/grg 398\n" STEP: restricting to a time range Aug 11 00:49:02.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078 --since=1s' Aug 11 00:49:02.755: INFO: stderr: "" Aug 11 00:49:02.755: INFO: stdout: "I0811 00:49:01.772409 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/9xb 564\nI0811 00:49:01.972422 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/c5h 545\nI0811 00:49:02.172402 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/2w5 376\nI0811 00:49:02.372459 1 logs_generator.go:76] 30 POST /api/v1/namespaces/kube-system/pods/ptm 456\nI0811 00:49:02.572409 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/lwc 506\n" Aug 11 00:49:02.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7078 --since=24h' Aug 11 00:49:02.865: INFO: stderr: "" Aug 11 00:49:02.865: INFO: stdout: "I0811 00:48:56.372269 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/w9bl 492\nI0811 00:48:56.572400 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/bwhp 498\nI0811 00:48:56.772419 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/rbxq 532\nI0811 00:48:56.972413 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/rmx 279\nI0811 00:48:57.172414 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/26th 399\nI0811 00:48:57.372398 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/xq5 394\nI0811 00:48:57.572435 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/6d8 460\nI0811 00:48:57.772469 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/vkj 351\nI0811 00:48:57.972380 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/sq4 495\nI0811 00:48:58.172391 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/qf7 258\nI0811 00:48:58.372411 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rntz 331\nI0811 00:48:58.572466 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/hmx 439\nI0811 00:48:58.772424 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/52tx 547\nI0811 00:48:58.972405 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/qn6 292\nI0811 00:48:59.172434 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/254 344\nI0811 00:48:59.372420 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/gvk5 201\nI0811 00:48:59.572402 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/h4v 247\nI0811 00:48:59.772403 1 
logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/pth 258\nI0811 00:48:59.972437 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/grg 398\nI0811 00:49:00.172405 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/w45f 468\nI0811 00:49:00.372456 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/lzz5 538\nI0811 00:49:00.572413 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/jwk 307\nI0811 00:49:00.772415 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/2zd 419\nI0811 00:49:00.972401 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/pt8r 282\nI0811 00:49:01.172427 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/p7j 475\nI0811 00:49:01.372405 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/mm5 260\nI0811 00:49:01.572429 1 logs_generator.go:76] 26 POST /api/v1/namespaces/kube-system/pods/4m9 398\nI0811 00:49:01.772409 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/9xb 564\nI0811 00:49:01.972422 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/c5h 545\nI0811 00:49:02.172402 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/2w5 376\nI0811 00:49:02.372459 1 logs_generator.go:76] 30 POST /api/v1/namespaces/kube-system/pods/ptm 456\nI0811 00:49:02.572409 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/lwc 506\nI0811 00:49:02.772379 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/default/pods/c8d5 260\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Aug 11 00:49:02.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7078' Aug 11 00:49:13.841: INFO: stderr: "" Aug 11 00:49:13.841: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:49:13.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7078" for this suite. 
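Each filter asserted above maps to a kubectl logs flag already visible in the commands; replayed by hand against this pod they look like the following sketch (pod and namespace names are the ones this run generated and would differ elsewhere):

    kubectl logs logs-generator -n kubectl-7078                        # full log
    kubectl logs logs-generator -n kubectl-7078 --tail=1               # last line only
    kubectl logs logs-generator -n kubectl-7078 --limit-bytes=1        # first byte only
    kubectl logs logs-generator -n kubectl-7078 --tail=1 --timestamps  # RFC3339-prefixed
    kubectl logs logs-generator -n kubectl-7078 --since=1s             # last second of output
    kubectl logs logs-generator -n kubectl-7078 --since=24h            # last day of output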
• [SLOW TEST:20.460 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":244,"skipped":4001,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:49:13.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:49:13.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec" in namespace "downward-api-7881" to be "Succeeded or Failed" Aug 11 00:49:13.962: INFO: Pod "downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.482682ms Aug 11 00:49:15.966: INFO: Pod "downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023155076s Aug 11 00:49:17.970: INFO: Pod "downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027001093s STEP: Saw pod success Aug 11 00:49:17.970: INFO: Pod "downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec" satisfied condition "Succeeded or Failed" Aug 11 00:49:17.973: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec container client-container: STEP: delete the pod Aug 11 00:49:18.067: INFO: Waiting for pod downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec to disappear Aug 11 00:49:18.073: INFO: Pod downwardapi-volume-00d70695-ba3c-4669-84a7-09f871fd55ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:49:18.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7881" for this suite. 
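The downward API volume validated above boils down to a resourceFieldRef projected into a file. A minimal sketch of such a pod, with illustrative names and a busybox image standing in for the test container:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo              # placeholder name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # the projected file then reads "250"
    EOF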
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":4019,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:49:18.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8058, will wait for the garbage collector to delete the pods Aug 11 00:49:24.196: INFO: Deleting Job.batch foo took: 6.979315ms Aug 11 00:49:24.697: INFO: Terminating Job.batch foo pods took: 500.257ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:50:03.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8058" for this suite. • [SLOW TEST:45.834 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":246,"skipped":4040,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:50:03.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:50:20.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7377" for this suite. • [SLOW TEST:16.400 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":247,"skipped":4077,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:50:20.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8809 STEP: creating service affinity-clusterip in namespace services-8809 STEP: creating replication controller affinity-clusterip in namespace services-8809 I0811 00:50:20.547854 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8809, replica count: 3 I0811 00:50:23.598222 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 00:50:26.598450 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 11 00:50:26.605: INFO: Creating new exec pod Aug 11 00:50:31.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8809 
execpod-affinity46hdn -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Aug 11 00:50:31.871: INFO: stderr: "I0811 00:50:31.778833 3189 log.go:181] (0xc000fa3290) (0xc0002e6b40) Create stream\nI0811 00:50:31.778884 3189 log.go:181] (0xc000fa3290) (0xc0002e6b40) Stream added, broadcasting: 1\nI0811 00:50:31.781085 3189 log.go:181] (0xc000fa3290) Reply frame received for 1\nI0811 00:50:31.781145 3189 log.go:181] (0xc000fa3290) (0xc0002e6be0) Create stream\nI0811 00:50:31.781163 3189 log.go:181] (0xc000fa3290) (0xc0002e6be0) Stream added, broadcasting: 3\nI0811 00:50:31.782060 3189 log.go:181] (0xc000fa3290) Reply frame received for 3\nI0811 00:50:31.782095 3189 log.go:181] (0xc000fa3290) (0xc0004e5720) Create stream\nI0811 00:50:31.782113 3189 log.go:181] (0xc000fa3290) (0xc0004e5720) Stream added, broadcasting: 5\nI0811 00:50:31.782941 3189 log.go:181] (0xc000fa3290) Reply frame received for 5\nI0811 00:50:31.862120 3189 log.go:181] (0xc000fa3290) Data frame received for 5\nI0811 00:50:31.862169 3189 log.go:181] (0xc0004e5720) (5) Data frame handling\nI0811 00:50:31.862205 3189 log.go:181] (0xc0004e5720) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0811 00:50:31.862340 3189 log.go:181] (0xc000fa3290) Data frame received for 5\nI0811 00:50:31.862364 3189 log.go:181] (0xc0004e5720) (5) Data frame handling\nI0811 00:50:31.862393 3189 log.go:181] (0xc0004e5720) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0811 00:50:31.862682 3189 log.go:181] (0xc000fa3290) Data frame received for 3\nI0811 00:50:31.862711 3189 log.go:181] (0xc0002e6be0) (3) Data frame handling\nI0811 00:50:31.862846 3189 log.go:181] (0xc000fa3290) Data frame received for 5\nI0811 00:50:31.862879 3189 log.go:181] (0xc0004e5720) (5) Data frame handling\nI0811 00:50:31.864986 3189 log.go:181] (0xc000fa3290) Data frame received for 1\nI0811 00:50:31.865019 3189 log.go:181] (0xc0002e6b40) (1) Data frame handling\nI0811 00:50:31.865052 3189 log.go:181] (0xc0002e6b40) (1) Data frame sent\nI0811 00:50:31.865093 3189 log.go:181] (0xc000fa3290) (0xc0002e6b40) Stream removed, broadcasting: 1\nI0811 00:50:31.865125 3189 log.go:181] (0xc000fa3290) Go away received\nI0811 00:50:31.865495 3189 log.go:181] (0xc000fa3290) (0xc0002e6b40) Stream removed, broadcasting: 1\nI0811 00:50:31.865516 3189 log.go:181] (0xc000fa3290) (0xc0002e6be0) Stream removed, broadcasting: 3\nI0811 00:50:31.865525 3189 log.go:181] (0xc000fa3290) (0xc0004e5720) Stream removed, broadcasting: 5\n" Aug 11 00:50:31.871: INFO: stdout: "" Aug 11 00:50:31.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8809 execpod-affinity46hdn -- /bin/sh -x -c nc -zv -t -w 2 10.103.63.123 80' Aug 11 00:50:32.093: INFO: stderr: "I0811 00:50:32.013017 3207 log.go:181] (0xc0006576b0) (0xc000a27540) Create stream\nI0811 00:50:32.013122 3207 log.go:181] (0xc0006576b0) (0xc000a27540) Stream added, broadcasting: 1\nI0811 00:50:32.016333 3207 log.go:181] (0xc0006576b0) Reply frame received for 1\nI0811 00:50:32.016379 3207 log.go:181] (0xc0006576b0) (0xc00087edc0) Create stream\nI0811 00:50:32.016392 3207 log.go:181] (0xc0006576b0) (0xc00087edc0) Stream added, broadcasting: 3\nI0811 00:50:32.017441 3207 log.go:181] (0xc0006576b0) Reply frame received for 3\nI0811 00:50:32.017481 3207 log.go:181] (0xc0006576b0) (0xc0009a20a0) Create stream\nI0811 00:50:32.017498 3207 log.go:181] (0xc0006576b0) (0xc0009a20a0) Stream added, broadcasting: 5\nI0811 
00:50:32.018573 3207 log.go:181] (0xc0006576b0) Reply frame received for 5\nI0811 00:50:32.085637 3207 log.go:181] (0xc0006576b0) Data frame received for 5\nI0811 00:50:32.085696 3207 log.go:181] (0xc0006576b0) Data frame received for 3\nI0811 00:50:32.085726 3207 log.go:181] (0xc00087edc0) (3) Data frame handling\nI0811 00:50:32.085762 3207 log.go:181] (0xc0009a20a0) (5) Data frame handling\nI0811 00:50:32.085802 3207 log.go:181] (0xc0009a20a0) (5) Data frame sent\nI0811 00:50:32.085818 3207 log.go:181] (0xc0006576b0) Data frame received for 5\nI0811 00:50:32.085826 3207 log.go:181] (0xc0009a20a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.63.123 80\nConnection to 10.103.63.123 80 port [tcp/http] succeeded!\nI0811 00:50:32.087012 3207 log.go:181] (0xc0006576b0) Data frame received for 1\nI0811 00:50:32.087032 3207 log.go:181] (0xc000a27540) (1) Data frame handling\nI0811 00:50:32.087054 3207 log.go:181] (0xc000a27540) (1) Data frame sent\nI0811 00:50:32.087068 3207 log.go:181] (0xc0006576b0) (0xc000a27540) Stream removed, broadcasting: 1\nI0811 00:50:32.087085 3207 log.go:181] (0xc0006576b0) Go away received\nI0811 00:50:32.087516 3207 log.go:181] (0xc0006576b0) (0xc000a27540) Stream removed, broadcasting: 1\nI0811 00:50:32.087536 3207 log.go:181] (0xc0006576b0) (0xc00087edc0) Stream removed, broadcasting: 3\nI0811 00:50:32.087546 3207 log.go:181] (0xc0006576b0) (0xc0009a20a0) Stream removed, broadcasting: 5\n" Aug 11 00:50:32.093: INFO: stdout: "" Aug 11 00:50:32.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-8809 execpod-affinity46hdn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.63.123:80/ ; done' Aug 11 00:50:32.405: INFO: stderr: "I0811 00:50:32.231910 3225 log.go:181] (0xc000c2e0b0) (0xc0001ff400) Create stream\nI0811 00:50:32.232015 3225 log.go:181] (0xc000c2e0b0) (0xc0001ff400) Stream added, broadcasting: 1\nI0811 00:50:32.233838 3225 log.go:181] (0xc000c2e0b0) Reply frame received for 1\nI0811 00:50:32.233869 3225 log.go:181] (0xc000c2e0b0) (0xc000427b80) Create stream\nI0811 00:50:32.233879 3225 log.go:181] (0xc000c2e0b0) (0xc000427b80) Stream added, broadcasting: 3\nI0811 00:50:32.234772 3225 log.go:181] (0xc000c2e0b0) Reply frame received for 3\nI0811 00:50:32.234802 3225 log.go:181] (0xc000c2e0b0) (0xc000032000) Create stream\nI0811 00:50:32.234810 3225 log.go:181] (0xc000c2e0b0) (0xc000032000) Stream added, broadcasting: 5\nI0811 00:50:32.235661 3225 log.go:181] (0xc000c2e0b0) Reply frame received for 5\nI0811 00:50:32.298024 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.298081 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.298108 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.298135 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.298161 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.298186 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.302157 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.302190 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.302230 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.302812 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.302882 3225 log.go:181] (0xc000032000) (5) Data frame 
handling\nI0811 00:50:32.302896 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.302909 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.302915 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.302921 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.309814 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.309844 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.309868 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.310543 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.310564 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.310582 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.310643 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.310664 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.310678 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.314900 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.314920 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.314937 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.315492 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.315507 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.315516 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.315528 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.315533 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.315539 3225 log.go:181] (0xc000032000) (5) Data frame sent\nI0811 00:50:32.315544 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.315549 3225 log.go:181] (0xc000032000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.315562 3225 log.go:181] (0xc000032000) (5) Data frame sent\nI0811 00:50:32.321263 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.321280 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.321297 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.321959 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.321980 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.321996 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.322005 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.322019 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.322026 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.327141 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.327157 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.327171 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.327528 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.327545 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.327553 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.327570 3225 log.go:181] (0xc000c2e0b0) Data frame received for 
3\nI0811 00:50:32.327591 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.327605 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.330982 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.331002 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.331019 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.331449 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.331464 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.331470 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.331478 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.331483 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.331487 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.340174 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.340193 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.340213 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.340280 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.340301 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.340319 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.345334 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.345366 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.345388 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.345815 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.345834 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.345847 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.345988 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.346002 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.346016 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.350831 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.350855 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.350871 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.351475 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.351489 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.351503 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.351527 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.351539 3225 log.go:181] (0xc000032000) (5) Data frame sent\nI0811 00:50:32.351553 3225 log.go:181] (0xc000427b80) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.356055 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.356072 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.356080 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.356546 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.356562 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.356575 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 
00:50:32.356596 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.356619 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.356631 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.362584 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.362608 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.362621 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.363233 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.363264 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.363283 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.363309 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.363317 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.363336 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.368228 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.368256 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.368285 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.368843 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.368861 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.368870 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.368896 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.368930 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.368967 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.374718 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.374747 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.374783 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.375415 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.375441 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.375467 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.375484 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.375496 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.375509 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.382538 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.382567 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.382595 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.383347 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.383368 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.383392 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.383406 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.383417 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.383433 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.388467 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.388498 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.388522 3225 log.go:181] (0xc000427b80) (3) Data frame 
sent\nI0811 00:50:32.389017 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.389036 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.389048 3225 log.go:181] (0xc000032000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.63.123:80/\nI0811 00:50:32.389200 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.389218 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.389232 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.396048 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.396070 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.396089 3225 log.go:181] (0xc000427b80) (3) Data frame sent\nI0811 00:50:32.396905 3225 log.go:181] (0xc000c2e0b0) Data frame received for 5\nI0811 00:50:32.396943 3225 log.go:181] (0xc000032000) (5) Data frame handling\nI0811 00:50:32.397192 3225 log.go:181] (0xc000c2e0b0) Data frame received for 3\nI0811 00:50:32.397227 3225 log.go:181] (0xc000427b80) (3) Data frame handling\nI0811 00:50:32.398850 3225 log.go:181] (0xc000c2e0b0) Data frame received for 1\nI0811 00:50:32.398879 3225 log.go:181] (0xc0001ff400) (1) Data frame handling\nI0811 00:50:32.398901 3225 log.go:181] (0xc0001ff400) (1) Data frame sent\nI0811 00:50:32.399030 3225 log.go:181] (0xc000c2e0b0) (0xc0001ff400) Stream removed, broadcasting: 1\nI0811 00:50:32.399219 3225 log.go:181] (0xc000c2e0b0) Go away received\nI0811 00:50:32.399352 3225 log.go:181] (0xc000c2e0b0) (0xc0001ff400) Stream removed, broadcasting: 1\nI0811 00:50:32.399369 3225 log.go:181] (0xc000c2e0b0) (0xc000427b80) Stream removed, broadcasting: 3\nI0811 00:50:32.399378 3225 log.go:181] (0xc000c2e0b0) (0xc000032000) Stream removed, broadcasting: 5\n" Aug 11 00:50:32.406: INFO: stdout: "\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg\naffinity-clusterip-5btvg" Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Received response from host: 
affinity-clusterip-5btvg Aug 11 00:50:32.406: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8809, will wait for the garbage collector to delete the pods Aug 11 00:50:32.516: INFO: Deleting ReplicationController affinity-clusterip took: 6.306223ms Aug 11 00:50:32.916: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.219487ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:50:43.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8809" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.642 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":248,"skipped":4098,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:50:43.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 11 00:50:44.031: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062202 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:50:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:50:44.031: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062202 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:50:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 11 00:50:54.043: INFO: Got : 
MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062260 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:50:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:50:54.043: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062260 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:50:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 11 00:51:04.051: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062290 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:51:04.051: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062290 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 11 00:51:14.059: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062320 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:51:14.059: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-a bbe3776b-dfaf-4639-98f5-915860b01ad1 6062320 0 2020-08-11 00:50:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 
Aug 11 00:51:24.067: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-b 9190a90a-49df-4a80-8689-e6c7ccff0565 6062350 0 2020-08-11 00:51:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:51:24.068: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-b 9190a90a-49df-4a80-8689-e6c7ccff0565 6062350 0 2020-08-11 00:51:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 11 00:51:34.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-b 9190a90a-49df-4a80-8689-e6c7ccff0565 6062380 0 2020-08-11 00:51:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 11 00:51:34.076: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7543 /api/v1/namespaces/watch-7543/configmaps/e2e-watch-test-configmap-b 9190a90a-49df-4a80-8689-e6c7ccff0565 6062380 0 2020-08-11 00:51:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-11 00:51:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:51:44.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7543" for this suite. 
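
For reference, the ADDED/MODIFIED/DELETED sequence above (configmap A's mutation going 1 -> 2, then configmap B appearing only to its own watchers) is exactly what a label-filtered watch delivers. A minimal client-go sketch of one such watcher, assuming the kubeconfig path the suite uses; the namespace and label selector are taken from the log, everything else is illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes a kubeconfig at the path the suite logs; adjust as needed.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Watch only configmaps carrying label A, mirroring the
    	// "correct watchers observe the notification" assertions above.
    	w, err := cs.CoreV1().ConfigMaps("watch-7543").Watch(context.TODO(), metav1.ListOptions{
    		LabelSelector: "watch-this-configmap=multiple-watchers-A",
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// Each event carries the full object, so the Data change
    	// (mutation: 1 -> 2) is visible inside the MODIFIED events.
    	for ev := range w.ResultChan() {
    		cm, ok := ev.Object.(*corev1.ConfigMap)
    		if !ok {
    			continue // e.g. bookmark or error events
    		}
    		fmt.Printf("%s %s data=%v\n", ev.Type, cm.Name, cm.Data)
    	}
    }

A second watcher opened with the selector for label B would see only configmap B's ADDED/DELETED pair, which is why each event above is logged twice: once per watcher whose selector matches.
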
• [SLOW TEST:60.128 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":249,"skipped":4100,"failed":0} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:51:44.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4211 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4211 I0811 00:51:44.301771 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4211, replica count: 2 I0811 00:51:47.352148 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 00:51:50.352427 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 11 00:51:50.352: INFO: Creating new exec pod Aug 11 00:51:55.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4211 execpodftg86 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 11 00:51:55.614: INFO: stderr: "I0811 00:51:55.505505 3243 log.go:181] (0xc000eaba20) (0xc0002406e0) Create stream\nI0811 00:51:55.505564 3243 log.go:181] (0xc000eaba20) (0xc0002406e0) Stream added, broadcasting: 1\nI0811 00:51:55.508399 3243 log.go:181] (0xc000eaba20) Reply frame received for 1\nI0811 00:51:55.508449 3243 log.go:181] (0xc000eaba20) (0xc000240e60) Create stream\nI0811 00:51:55.508461 3243 log.go:181] (0xc000eaba20) (0xc000240e60) Stream added, broadcasting: 3\nI0811 00:51:55.509713 3243 log.go:181] (0xc000eaba20) Reply frame received for 3\nI0811 00:51:55.509758 3243 log.go:181] (0xc000eaba20) (0xc00052c280) Create stream\nI0811 00:51:55.509785 3243 log.go:181] (0xc000eaba20) (0xc00052c280) Stream added, broadcasting: 5\nI0811 00:51:55.510824 3243 log.go:181] (0xc000eaba20) Reply frame received for 5\nI0811 00:51:55.605741 3243 log.go:181] (0xc000eaba20) Data frame received for 5\nI0811 00:51:55.605782 3243 log.go:181] (0xc00052c280) (5) Data frame handling\nI0811 00:51:55.605807 3243 log.go:181] (0xc00052c280) (5) Data frame sent\n+ nc -zv -t -w 2 
externalname-service 80\nI0811 00:51:55.606182 3243 log.go:181] (0xc000eaba20) Data frame received for 5\nI0811 00:51:55.606219 3243 log.go:181] (0xc00052c280) (5) Data frame handling\nI0811 00:51:55.606246 3243 log.go:181] (0xc00052c280) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0811 00:51:55.606592 3243 log.go:181] (0xc000eaba20) Data frame received for 5\nI0811 00:51:55.606615 3243 log.go:181] (0xc00052c280) (5) Data frame handling\nI0811 00:51:55.606814 3243 log.go:181] (0xc000eaba20) Data frame received for 3\nI0811 00:51:55.606845 3243 log.go:181] (0xc000240e60) (3) Data frame handling\nI0811 00:51:55.608552 3243 log.go:181] (0xc000eaba20) Data frame received for 1\nI0811 00:51:55.608571 3243 log.go:181] (0xc0002406e0) (1) Data frame handling\nI0811 00:51:55.608592 3243 log.go:181] (0xc0002406e0) (1) Data frame sent\nI0811 00:51:55.608609 3243 log.go:181] (0xc000eaba20) (0xc0002406e0) Stream removed, broadcasting: 1\nI0811 00:51:55.608827 3243 log.go:181] (0xc000eaba20) Go away received\nI0811 00:51:55.609089 3243 log.go:181] (0xc000eaba20) (0xc0002406e0) Stream removed, broadcasting: 1\nI0811 00:51:55.609124 3243 log.go:181] (0xc000eaba20) (0xc000240e60) Stream removed, broadcasting: 3\nI0811 00:51:55.609139 3243 log.go:181] (0xc000eaba20) (0xc00052c280) Stream removed, broadcasting: 5\n" Aug 11 00:51:55.614: INFO: stdout: "" Aug 11 00:51:55.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4211 execpodftg86 -- /bin/sh -x -c nc -zv -t -w 2 10.111.251.54 80' Aug 11 00:51:55.817: INFO: stderr: "I0811 00:51:55.747880 3259 log.go:181] (0xc000c81130) (0xc000e1a3c0) Create stream\nI0811 00:51:55.747923 3259 log.go:181] (0xc000c81130) (0xc000e1a3c0) Stream added, broadcasting: 1\nI0811 00:51:55.753070 3259 log.go:181] (0xc000c81130) Reply frame received for 1\nI0811 00:51:55.753105 3259 log.go:181] (0xc000c81130) (0xc000ae7220) Create stream\nI0811 00:51:55.753115 3259 log.go:181] (0xc000c81130) (0xc000ae7220) Stream added, broadcasting: 3\nI0811 00:51:55.754125 3259 log.go:181] (0xc000c81130) Reply frame received for 3\nI0811 00:51:55.754153 3259 log.go:181] (0xc000c81130) (0xc000ae0500) Create stream\nI0811 00:51:55.754164 3259 log.go:181] (0xc000c81130) (0xc000ae0500) Stream added, broadcasting: 5\nI0811 00:51:55.754833 3259 log.go:181] (0xc000c81130) Reply frame received for 5\nI0811 00:51:55.807772 3259 log.go:181] (0xc000c81130) Data frame received for 5\nI0811 00:51:55.807810 3259 log.go:181] (0xc000ae0500) (5) Data frame handling\nI0811 00:51:55.807843 3259 log.go:181] (0xc000ae0500) (5) Data frame sent\nI0811 00:51:55.807857 3259 log.go:181] (0xc000c81130) Data frame received for 5\nI0811 00:51:55.807870 3259 log.go:181] (0xc000ae0500) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.251.54 80\nConnection to 10.111.251.54 80 port [tcp/http] succeeded!\nI0811 00:51:55.808227 3259 log.go:181] (0xc000c81130) Data frame received for 3\nI0811 00:51:55.808346 3259 log.go:181] (0xc000ae7220) (3) Data frame handling\nI0811 00:51:55.811059 3259 log.go:181] (0xc000c81130) Data frame received for 1\nI0811 00:51:55.811097 3259 log.go:181] (0xc000e1a3c0) (1) Data frame handling\nI0811 00:51:55.811117 3259 log.go:181] (0xc000e1a3c0) (1) Data frame sent\nI0811 00:51:55.811137 3259 log.go:181] (0xc000c81130) (0xc000e1a3c0) Stream removed, broadcasting: 1\nI0811 00:51:55.811160 3259 log.go:181] (0xc000c81130) Go away received\nI0811 00:51:55.811831 3259 log.go:181] 
(0xc000c81130) (0xc000e1a3c0) Stream removed, broadcasting: 1\nI0811 00:51:55.811865 3259 log.go:181] (0xc000c81130) (0xc000ae7220) Stream removed, broadcasting: 3\nI0811 00:51:55.811885 3259 log.go:181] (0xc000c81130) (0xc000ae0500) Stream removed, broadcasting: 5\n" Aug 11 00:51:55.817: INFO: stdout: "" Aug 11 00:51:55.817: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:51:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4211" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.780 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":250,"skipped":4100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:51:55.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 11 00:52:04.036: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 11 00:52:04.128: INFO: Pod pod-with-prestop-exec-hook still exists Aug 11 00:52:06.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 11 00:52:06.133: INFO: Pod pod-with-prestop-exec-hook still exists Aug 11 00:52:08.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 11 00:52:08.132: INFO: Pod pod-with-prestop-exec-hook still exists Aug 11 00:52:10.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 11 00:52:10.133: INFO: Pod pod-with-prestop-exec-hook still exists Aug 11 00:52:12.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 11 00:52:12.132: INFO: Pod pod-with-prestop-exec-hook still exists Aug 11 00:52:14.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 11 00:52:14.133: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:52:14.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3283" for this suite. • [SLOW TEST:18.300 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":251,"skipped":4138,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:52:14.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 11 00:52:14.782: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 11 00:52:16.794: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703934, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703934, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703934, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703934, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:52:19.907: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:52:19.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:52:21.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1611" for this suite. 
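
The "non homogeneous list" conversion above works because the CRD carries a webhook conversion stanza pointing at the deployed service; the apiserver calls it to convert v1/v2 objects on the fly while listing. A sketch of that stanza using the apiextensions v1 Go types, assuming the service name from the log ("e2e-test-crd-conversion-webhook"); the path and port are hypothetical, since the log does not show them:

    package sketch

    import (
    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    // conversionStanza returns the spec.conversion block that routes
    // version conversion through an in-cluster webhook service.
    func conversionStanza(caBundle []byte) *apiextensionsv1.CustomResourceConversion {
    	path := "/crdconvert" // hypothetical; not shown in the log
    	port := int32(443)
    	return &apiextensionsv1.CustomResourceConversion{
    		Strategy: apiextensionsv1.WebhookConverter,
    		Webhook: &apiextensionsv1.WebhookConversion{
    			ClientConfig: &apiextensionsv1.WebhookClientConfig{
    				Service: &apiextensionsv1.ServiceReference{
    					Namespace: "crd-webhook-1611",
    					Name:      "e2e-test-crd-conversion-webhook",
    					Path:      &path,
    					Port:      &port,
    				},
    				// The CA bundle pairs with the server cert set up in
    				// the "Setting up server cert" step above.
    				CABundle: caBundle,
    			},
    			ConversionReviewVersions: []string{"v1", "v1beta1"},
    		},
    	}
    }
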
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.151 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":252,"skipped":4146,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:52:21.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:52:52.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7395" for this suite. STEP: Destroying namespace "nsdeletetest-8736" for this suite. Aug 11 00:52:52.645: INFO: Namespace nsdeletetest-8736 was already deleted STEP: Destroying namespace "nsdeletetest-4492" for this suite. 
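
The contract this spec checks is that deleting a namespace garbage-collects everything inside it, and that the delete is only "done" once the namespace object itself disappears. A minimal sketch of that delete-and-wait step with client-go, assuming the poll intervals; the API calls are standard, the timings illustrative:

    package sketch

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteNamespaceAndWait mirrors the steps above: delete the
    // namespace, then poll until the API server reports it gone,
    // which only happens after its pods have been removed.
    func deleteNamespaceAndWait(cs kubernetes.Interface, ns string) error {
    	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
    		return err
    	}
    	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
    		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			return true, nil // namespace and its contents are gone
    		}
    		return false, err
    	})
    }
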
• [SLOW TEST:31.332 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":253,"skipped":4155,"failed":0} SSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:52:52.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Aug 11 00:52:52.713: INFO: created test-podtemplate-1 Aug 11 00:52:52.729: INFO: created test-podtemplate-2 Aug 11 00:52:52.741: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Aug 11 00:52:52.747: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Aug 11 00:52:52.768: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:52:52.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3575" for this suite. 
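
The "requesting DeleteCollection of pod templates" step above is a single API call, not three individual deletes: everything in the namespace matching a label selector is removed in one request. A sketch, assuming a selector parameter (the actual label used by the test is not shown in the log):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deletePodTemplates deletes every PodTemplate in ns matching
    // selector with one DeleteCollection request, as the test does.
    func deletePodTemplates(cs kubernetes.Interface, ns, selector string) error {
    	return cs.CoreV1().PodTemplates(ns).DeleteCollection(
    		context.TODO(),
    		metav1.DeleteOptions{},
    		metav1.ListOptions{LabelSelector: selector},
    	)
    }

The follow-up list in the log ("requesting list of pod templates to confirm quantity") is the natural verification: list with the same selector and expect zero items.
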
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":254,"skipped":4162,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:52:52.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-05e441ae-25b7-4bce-8125-b58ef7ba381b STEP: Creating a pod to test consume secrets Aug 11 00:52:52.899: INFO: Waiting up to 5m0s for pod "pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004" in namespace "secrets-7120" to be "Succeeded or Failed" Aug 11 00:52:52.909: INFO: Pod "pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.227317ms Aug 11 00:52:54.913: INFO: Pod "pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014348675s Aug 11 00:52:56.917: INFO: Pod "pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018366796s Aug 11 00:52:58.922: INFO: Pod "pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022865593s STEP: Saw pod success Aug 11 00:52:58.922: INFO: Pod "pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004" satisfied condition "Succeeded or Failed" Aug 11 00:52:58.925: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004 container secret-volume-test: STEP: delete the pod Aug 11 00:52:58.960: INFO: Waiting for pod pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004 to disappear Aug 11 00:52:58.984: INFO: Pod pod-secrets-3b0d66b7-146a-49b1-a40b-39014f7aa004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:52:58.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7120" for this suite. 
• [SLOW TEST:6.213 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4172,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:52:58.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:52:59.788: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:53:01.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703979, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703979, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703979, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732703979, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:53:04.831: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 11 00:53:08.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config attach --namespace=webhook-3445 to-be-attached-pod -i -c=container1' Aug 11 00:53:09.002: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:53:09.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3445" for this suite. STEP: Destroying namespace "webhook-3445-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.069 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":256,"skipped":4178,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:53:09.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Aug 11 00:53:15.197: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9100 PodName:pod-sharedvolume-98a047c1-dc5e-4373-9a62-e4cad2f1cb07 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:53:15.197: INFO: >>> kubeConfig: /root/.kube/config I0811 00:53:15.327767 7 log.go:181] (0xc000124c60) (0xc007016960) Create stream I0811 00:53:15.327800 7 log.go:181] (0xc000124c60) (0xc007016960) Stream added, broadcasting: 1 I0811 00:53:15.329619 7 log.go:181] (0xc000124c60) Reply frame received for 1 I0811 00:53:15.329676 7 log.go:181] (0xc000124c60) (0xc007016a00) Create stream I0811 00:53:15.329698 7 log.go:181] (0xc000124c60) (0xc007016a00) Stream added, broadcasting: 3 I0811 00:53:15.330741 7 log.go:181] (0xc000124c60) Reply frame received for 3 I0811 00:53:15.330768 7 log.go:181] (0xc000124c60) (0xc0005a9040) Create stream I0811 00:53:15.330790 7 log.go:181] (0xc000124c60) (0xc0005a9040) Stream added, broadcasting: 5 I0811 00:53:15.331507 7 log.go:181] (0xc000124c60) Reply frame received for 5 I0811 00:53:15.410294 7 log.go:181] (0xc000124c60) Data frame received for 3 I0811 00:53:15.410344 7 log.go:181] (0xc007016a00) (3) Data frame handling I0811 00:53:15.410361 7 log.go:181] (0xc007016a00) (3) Data frame sent I0811 00:53:15.410371 7 log.go:181] (0xc000124c60) Data frame received for 3 I0811 00:53:15.410379 7 log.go:181] (0xc007016a00) (3) Data frame handling I0811 00:53:15.410403 7 log.go:181] (0xc000124c60) Data frame received for 5 I0811 00:53:15.410414 7 log.go:181] 
(0xc0005a9040) (5) Data frame handling I0811 00:53:15.412300 7 log.go:181] (0xc000124c60) Data frame received for 1 I0811 00:53:15.412357 7 log.go:181] (0xc007016960) (1) Data frame handling I0811 00:53:15.412466 7 log.go:181] (0xc007016960) (1) Data frame sent I0811 00:53:15.412490 7 log.go:181] (0xc000124c60) (0xc007016960) Stream removed, broadcasting: 1 I0811 00:53:15.412506 7 log.go:181] (0xc000124c60) Go away received I0811 00:53:15.412615 7 log.go:181] (0xc000124c60) (0xc007016960) Stream removed, broadcasting: 1 I0811 00:53:15.412640 7 log.go:181] (0xc000124c60) (0xc007016a00) Stream removed, broadcasting: 3 I0811 00:53:15.412659 7 log.go:181] (0xc000124c60) (0xc0005a9040) Stream removed, broadcasting: 5 Aug 11 00:53:15.412: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:53:15.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9100" for this suite. • [SLOW TEST:6.359 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":257,"skipped":4184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:53:15.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 11 00:53:20.006: INFO: Successfully updated pod "pod-update-314a77d3-f656-42b8-9141-88aff6ba018e" STEP: verifying the updated pod is in kubernetes Aug 11 00:53:20.031: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:53:20.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-501" for this suite. 
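
The "updating the pod" step above is the usual read-modify-write against the API server; because pod updates can hit resourceVersion conflicts, the idiomatic client-go form wraps it in a conflict retry. A minimal sketch, assuming a label update (the specific field the test mutates is not shown in the log):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // updatePodLabel re-fetches the pod and retries the update whenever
    // the API server reports a resourceVersion conflict.
    func updatePodLabel(cs kubernetes.Interface, ns, name, key, value string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if pod.Labels == nil {
    			pod.Labels = map[string]string{}
    		}
    		pod.Labels[key] = value
    		_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
    		return err
    	})
    }
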
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4214,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:53:20.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6012.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6012.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 00:53:28.266: INFO: DNS probes using dns-test-8f49096c-3d4e-433c-8a5a-710c5f5536ec succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6012.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6012.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 00:53:34.425: INFO: File wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:34.453: INFO: File jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:34.453: INFO: Lookups using dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba failed for: [wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local] Aug 11 00:53:39.457: INFO: File wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:39.461: INFO: File jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 11 00:53:39.461: INFO: Lookups using dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba failed for: [wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local] Aug 11 00:53:44.458: INFO: File wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:44.461: INFO: File jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:44.461: INFO: Lookups using dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba failed for: [wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local] Aug 11 00:53:49.496: INFO: File wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:49.500: INFO: File jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:49.500: INFO: Lookups using dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba failed for: [wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local] Aug 11 00:53:54.457: INFO: File wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:54.460: INFO: File jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local from pod dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 00:53:54.460: INFO: Lookups using dns-6012/dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba failed for: [wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local] Aug 11 00:53:59.469: INFO: DNS probes using dns-test-e0edd95c-855f-420a-bca5-2bab619d68ba succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6012.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6012.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6012.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6012.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 00:54:06.360: INFO: DNS probes using dns-test-fbafecbe-0832-44f4-a47c-fa8a9c9d4e1e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:54:06.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6012" for this suite. 
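
The dig probes above resolve a CNAME because an ExternalName service is nothing but a DNS alias: no endpoints, just a record pointing at spec.externalName. The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" failures show why the test polls; after the update, the new CNAME only appears once the change propagates through the cluster DNS. A sketch of the create-then-update flow, using the hostnames and service name from the log:

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // externalNameService creates the CNAME-backed service the DNS
    // probes resolve, then points it at a new external hostname.
    func externalNameService(cs kubernetes.Interface, ns string) error {
    	svc := &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: ns},
    		Spec: corev1.ServiceSpec{
    			Type:         corev1.ServiceTypeExternalName,
    			ExternalName: "foo.example.com",
    		},
    	}
    	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
    		return err
    	}
    	// "changing the externalName to bar.example.com": until cluster
    	// DNS picks this up, lookups still return the old CNAME, which
    	// is exactly what the polling in the log shows.
    	cur, err := cs.CoreV1().Services(ns).Get(context.TODO(), "dns-test-service-3", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	cur.Spec.ExternalName = "bar.example.com"
    	_, err = cs.CoreV1().Services(ns).Update(context.TODO(), cur, metav1.UpdateOptions{})
    	return err
    }
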
• [SLOW TEST:46.436 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":259,"skipped":4221,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:54:06.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5185 Aug 11 00:54:10.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5185 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 11 00:54:11.126: INFO: stderr: "I0811 00:54:11.045995 3295 log.go:181] (0xc000b56dc0) (0xc000dca5a0) Create stream\nI0811 00:54:11.046073 3295 log.go:181] (0xc000b56dc0) (0xc000dca5a0) Stream added, broadcasting: 1\nI0811 00:54:11.050684 3295 log.go:181] (0xc000b56dc0) Reply frame received for 1\nI0811 00:54:11.050727 3295 log.go:181] (0xc000b56dc0) (0xc000a6f220) Create stream\nI0811 00:54:11.050738 3295 log.go:181] (0xc000b56dc0) (0xc000a6f220) Stream added, broadcasting: 3\nI0811 00:54:11.051570 3295 log.go:181] (0xc000b56dc0) Reply frame received for 3\nI0811 00:54:11.051609 3295 log.go:181] (0xc000b56dc0) (0xc00051c280) Create stream\nI0811 00:54:11.051621 3295 log.go:181] (0xc000b56dc0) (0xc00051c280) Stream added, broadcasting: 5\nI0811 00:54:11.052386 3295 log.go:181] (0xc000b56dc0) Reply frame received for 5\nI0811 00:54:11.115113 3295 log.go:181] (0xc000b56dc0) Data frame received for 5\nI0811 00:54:11.115135 3295 log.go:181] (0xc00051c280) (5) Data frame handling\nI0811 00:54:11.115146 3295 log.go:181] (0xc00051c280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0811 00:54:11.118325 3295 log.go:181] (0xc000b56dc0) Data frame received for 3\nI0811 00:54:11.118346 3295 log.go:181] (0xc000a6f220) (3) Data frame handling\nI0811 00:54:11.118358 3295 log.go:181] (0xc000a6f220) (3) Data frame sent\nI0811 00:54:11.119107 3295 log.go:181] (0xc000b56dc0) Data frame received for 5\nI0811 00:54:11.119134 3295 log.go:181] (0xc00051c280) (5) Data frame handling\nI0811 00:54:11.119154 3295 log.go:181] (0xc000b56dc0) Data frame received for 3\nI0811 00:54:11.119167 3295 log.go:181] (0xc000a6f220) (3) Data frame handling\nI0811 00:54:11.120934 3295 log.go:181] (0xc000b56dc0) Data frame received for 1\nI0811 00:54:11.120961 3295 
log.go:181] (0xc000dca5a0) (1) Data frame handling\nI0811 00:54:11.120999 3295 log.go:181] (0xc000dca5a0) (1) Data frame sent\nI0811 00:54:11.121025 3295 log.go:181] (0xc000b56dc0) (0xc000dca5a0) Stream removed, broadcasting: 1\nI0811 00:54:11.121050 3295 log.go:181] (0xc000b56dc0) Go away received\nI0811 00:54:11.121520 3295 log.go:181] (0xc000b56dc0) (0xc000dca5a0) Stream removed, broadcasting: 1\nI0811 00:54:11.121547 3295 log.go:181] (0xc000b56dc0) (0xc000a6f220) Stream removed, broadcasting: 3\nI0811 00:54:11.121559 3295 log.go:181] (0xc000b56dc0) (0xc00051c280) Stream removed, broadcasting: 5\n" Aug 11 00:54:11.126: INFO: stdout: "iptables" Aug 11 00:54:11.126: INFO: proxyMode: iptables Aug 11 00:54:11.130: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 11 00:54:11.137: INFO: Pod kube-proxy-mode-detector still exists Aug 11 00:54:13.138: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 11 00:54:13.184: INFO: Pod kube-proxy-mode-detector still exists Aug 11 00:54:15.138: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 11 00:54:15.141: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-5185 STEP: creating replication controller affinity-clusterip-timeout in namespace services-5185 I0811 00:54:15.187898 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5185, replica count: 3 I0811 00:54:18.238300 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 00:54:21.238593 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 11 00:54:21.247: INFO: Creating new exec pod Aug 11 00:54:26.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5185 execpod-affinity8gpkr -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Aug 11 00:54:26.500: INFO: stderr: "I0811 00:54:26.397450 3313 log.go:181] (0xc000c8adc0) (0xc0008d5860) Create stream\nI0811 00:54:26.397518 3313 log.go:181] (0xc000c8adc0) (0xc0008d5860) Stream added, broadcasting: 1\nI0811 00:54:26.399603 3313 log.go:181] (0xc000c8adc0) Reply frame received for 1\nI0811 00:54:26.399635 3313 log.go:181] (0xc000c8adc0) (0xc00050abe0) Create stream\nI0811 00:54:26.399645 3313 log.go:181] (0xc000c8adc0) (0xc00050abe0) Stream added, broadcasting: 3\nI0811 00:54:26.400481 3313 log.go:181] (0xc000c8adc0) Reply frame received for 3\nI0811 00:54:26.400512 3313 log.go:181] (0xc000c8adc0) (0xc0004d4fa0) Create stream\nI0811 00:54:26.400523 3313 log.go:181] (0xc000c8adc0) (0xc0004d4fa0) Stream added, broadcasting: 5\nI0811 00:54:26.401504 3313 log.go:181] (0xc000c8adc0) Reply frame received for 5\nI0811 00:54:26.493385 3313 log.go:181] (0xc000c8adc0) Data frame received for 5\nI0811 00:54:26.493417 3313 log.go:181] (0xc0004d4fa0) (5) Data frame handling\nI0811 00:54:26.493425 3313 log.go:181] (0xc0004d4fa0) (5) Data frame sent\nI0811 00:54:26.493430 3313 log.go:181] (0xc000c8adc0) Data frame received for 5\nI0811 00:54:26.493434 3313 log.go:181] (0xc0004d4fa0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0811 00:54:26.493449 3313 log.go:181] (0xc000c8adc0) Data frame 
received for 3\nI0811 00:54:26.493453 3313 log.go:181] (0xc00050abe0) (3) Data frame handling\nI0811 00:54:26.494064 3313 log.go:181] (0xc000c8adc0) Data frame received for 1\nI0811 00:54:26.494079 3313 log.go:181] (0xc0008d5860) (1) Data frame handling\nI0811 00:54:26.494096 3313 log.go:181] (0xc0008d5860) (1) Data frame sent\nI0811 00:54:26.494111 3313 log.go:181] (0xc000c8adc0) (0xc0008d5860) Stream removed, broadcasting: 1\nI0811 00:54:26.494219 3313 log.go:181] (0xc000c8adc0) Go away received\nI0811 00:54:26.494425 3313 log.go:181] (0xc000c8adc0) (0xc0008d5860) Stream removed, broadcasting: 1\nI0811 00:54:26.494441 3313 log.go:181] (0xc000c8adc0) (0xc00050abe0) Stream removed, broadcasting: 3\nI0811 00:54:26.494447 3313 log.go:181] (0xc000c8adc0) (0xc0004d4fa0) Stream removed, broadcasting: 5\n" Aug 11 00:54:26.500: INFO: stdout: "" Aug 11 00:54:26.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5185 execpod-affinity8gpkr -- /bin/sh -x -c nc -zv -t -w 2 10.111.151.75 80' Aug 11 00:54:26.704: INFO: stderr: "I0811 00:54:26.632560 3331 log.go:181] (0xc000d66d10) (0xc000e18320) Create stream\nI0811 00:54:26.632627 3331 log.go:181] (0xc000d66d10) (0xc000e18320) Stream added, broadcasting: 1\nI0811 00:54:26.638310 3331 log.go:181] (0xc000d66d10) Reply frame received for 1\nI0811 00:54:26.638363 3331 log.go:181] (0xc000d66d10) (0xc000919220) Create stream\nI0811 00:54:26.638387 3331 log.go:181] (0xc000d66d10) (0xc000919220) Stream added, broadcasting: 3\nI0811 00:54:26.639347 3331 log.go:181] (0xc000d66d10) Reply frame received for 3\nI0811 00:54:26.639378 3331 log.go:181] (0xc000d66d10) (0xc0008ea500) Create stream\nI0811 00:54:26.639388 3331 log.go:181] (0xc000d66d10) (0xc0008ea500) Stream added, broadcasting: 5\nI0811 00:54:26.640248 3331 log.go:181] (0xc000d66d10) Reply frame received for 5\nI0811 00:54:26.697388 3331 log.go:181] (0xc000d66d10) Data frame received for 3\nI0811 00:54:26.697441 3331 log.go:181] (0xc000919220) (3) Data frame handling\nI0811 00:54:26.697476 3331 log.go:181] (0xc000d66d10) Data frame received for 5\nI0811 00:54:26.697504 3331 log.go:181] (0xc0008ea500) (5) Data frame handling\nI0811 00:54:26.697534 3331 log.go:181] (0xc0008ea500) (5) Data frame sent\nI0811 00:54:26.697552 3331 log.go:181] (0xc000d66d10) Data frame received for 5\nI0811 00:54:26.697568 3331 log.go:181] (0xc0008ea500) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.151.75 80\nConnection to 10.111.151.75 80 port [tcp/http] succeeded!\nI0811 00:54:26.698643 3331 log.go:181] (0xc000d66d10) Data frame received for 1\nI0811 00:54:26.698683 3331 log.go:181] (0xc000e18320) (1) Data frame handling\nI0811 00:54:26.698708 3331 log.go:181] (0xc000e18320) (1) Data frame sent\nI0811 00:54:26.698741 3331 log.go:181] (0xc000d66d10) (0xc000e18320) Stream removed, broadcasting: 1\nI0811 00:54:26.698763 3331 log.go:181] (0xc000d66d10) Go away received\nI0811 00:54:26.699184 3331 log.go:181] (0xc000d66d10) (0xc000e18320) Stream removed, broadcasting: 1\nI0811 00:54:26.699203 3331 log.go:181] (0xc000d66d10) (0xc000919220) Stream removed, broadcasting: 3\nI0811 00:54:26.699213 3331 log.go:181] (0xc000d66d10) (0xc0008ea500) Stream removed, broadcasting: 5\n" Aug 11 00:54:26.705: INFO: stdout: "" Aug 11 00:54:26.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5185 execpod-affinity8gpkr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q 
-s --connect-timeout 2 http://10.111.151.75:80/ ; done' Aug 11 00:54:27.002: INFO: stderr: "I0811 00:54:26.833891 3349 log.go:181] (0xc000bb9550) (0xc0008cd720) Create stream\nI0811 00:54:26.833961 3349 log.go:181] (0xc000bb9550) (0xc0008cd720) Stream added, broadcasting: 1\nI0811 00:54:26.838833 3349 log.go:181] (0xc000bb9550) Reply frame received for 1\nI0811 00:54:26.838865 3349 log.go:181] (0xc000bb9550) (0xc0008266e0) Create stream\nI0811 00:54:26.838875 3349 log.go:181] (0xc000bb9550) (0xc0008266e0) Stream added, broadcasting: 3\nI0811 00:54:26.839898 3349 log.go:181] (0xc000bb9550) Reply frame received for 3\nI0811 00:54:26.839945 3349 log.go:181] (0xc000bb9550) (0xc0005332c0) Create stream\nI0811 00:54:26.839956 3349 log.go:181] (0xc000bb9550) (0xc0005332c0) Stream added, broadcasting: 5\nI0811 00:54:26.840949 3349 log.go:181] (0xc000bb9550) Reply frame received for 5\nI0811 00:54:26.896952 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.897000 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.897016 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.897038 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.897048 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.897074 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.903867 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.903910 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.903946 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.904269 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.904285 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.904311 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.904351 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.904375 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.904404 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.910804 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.910837 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.910871 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.911628 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.911644 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.911652 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.911757 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.911777 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.911788 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.915567 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.915583 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.915595 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.916238 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.916254 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.916263 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.916307 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 
00:54:26.916341 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.916371 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.922821 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.922834 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.922839 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.923235 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.923288 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.923312 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.923332 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.923345 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.923362 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.927652 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.927665 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.927672 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.928056 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.928075 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.928086 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.928098 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.928107 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.928114 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.931981 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.931997 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.932009 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.932259 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.932269 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.932278 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.932310 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.932330 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.932356 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.936028 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.936045 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.936056 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.936942 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.936974 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.936986 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.937003 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.937017 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.937027 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.945584 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.945599 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.945609 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 
00:54:26.945908 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.945924 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.945940 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.946050 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.946062 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.946075 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.951127 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.951149 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.951165 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.951694 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.951718 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.951728 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.951747 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.951769 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.951778 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.957323 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.957347 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.957369 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.957979 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.958009 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.958021 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.958045 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.958068 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.958080 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\nI0811 00:54:26.958091 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.958106 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.958132 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\nI0811 00:54:26.962055 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.962080 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.962098 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.962669 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.962693 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.962730 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.962746 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.962761 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.962771 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.968387 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.968400 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.968413 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.969081 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.969107 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.969131 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.969145 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.969160 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.969175 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.974357 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.974378 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.974398 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.975151 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.975177 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.975189 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.975223 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.975238 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.975253 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.980924 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.980948 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.980968 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.981702 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.981720 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.981731 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.981756 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.981785 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.981800 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.989456 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.989487 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.989511 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.990155 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.990171 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.990186 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.990210 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.990221 3349 log.go:181] (0xc0005332c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:26.990237 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.993424 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.993454 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.993475 3349 log.go:181] (0xc0008266e0) (3) Data frame sent\nI0811 00:54:26.995135 3349 log.go:181] (0xc000bb9550) Data frame received for 3\nI0811 00:54:26.995276 3349 log.go:181] (0xc0008266e0) (3) Data frame handling\nI0811 00:54:26.995322 3349 log.go:181] (0xc000bb9550) Data frame received for 5\nI0811 00:54:26.995341 3349 log.go:181] (0xc0005332c0) (5) Data frame handling\nI0811 00:54:26.996444 3349 log.go:181] (0xc000bb9550) Data frame received for 1\nI0811 00:54:26.996466 3349 log.go:181] (0xc0008cd720) (1) Data frame handling\nI0811 00:54:26.996474 3349 log.go:181] (0xc0008cd720) (1) Data frame sent\nI0811 00:54:26.996485 3349 log.go:181] (0xc000bb9550) (0xc0008cd720) Stream removed, broadcasting: 1\nI0811 00:54:26.996533 3349 log.go:181] 
(0xc000bb9550) Go away received\nI0811 00:54:26.996836 3349 log.go:181] (0xc000bb9550) (0xc0008cd720) Stream removed, broadcasting: 1\nI0811 00:54:26.996849 3349 log.go:181] (0xc000bb9550) (0xc0008266e0) Stream removed, broadcasting: 3\nI0811 00:54:26.996855 3349 log.go:181] (0xc000bb9550) (0xc0005332c0) Stream removed, broadcasting: 5\n" Aug 11 00:54:27.003: INFO: stdout: "\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx\naffinity-clusterip-timeout-lczpx" Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Received response from host: affinity-clusterip-timeout-lczpx Aug 11 00:54:27.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5185 execpod-affinity8gpkr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.151.75:80/' Aug 11 00:54:27.221: INFO: stderr: "I0811 00:54:27.138993 3367 log.go:181] (0xc000d1b970) (0xc0009894a0) Create stream\nI0811 00:54:27.139077 3367 log.go:181] (0xc000d1b970) (0xc0009894a0) Stream added, broadcasting: 1\nI0811 00:54:27.142241 3367 log.go:181] (0xc000d1b970) Reply frame received for 1\nI0811 00:54:27.142302 3367 log.go:181] (0xc000d1b970) (0xc000804b40) Create stream\nI0811 00:54:27.142349 3367 log.go:181] (0xc000d1b970) (0xc000804b40) Stream added, broadcasting: 3\nI0811 00:54:27.143243 3367 log.go:181] (0xc000d1b970) Reply frame received for 3\nI0811 00:54:27.143275 3367 log.go:181] (0xc000d1b970) (0xc00082d2c0) Create stream\nI0811 00:54:27.143283 3367 log.go:181] (0xc000d1b970) (0xc00082d2c0) Stream added, broadcasting: 5\nI0811 00:54:27.144011 3367 log.go:181] (0xc000d1b970) Reply frame received for 5\nI0811 00:54:27.209380 3367 log.go:181] (0xc000d1b970) Data frame received for 5\nI0811 00:54:27.209416 3367 log.go:181] 
(0xc00082d2c0) (5) Data frame handling\nI0811 00:54:27.209434 3367 log.go:181] (0xc00082d2c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:27.214364 3367 log.go:181] (0xc000d1b970) Data frame received for 3\nI0811 00:54:27.214390 3367 log.go:181] (0xc000804b40) (3) Data frame handling\nI0811 00:54:27.214407 3367 log.go:181] (0xc000804b40) (3) Data frame sent\nI0811 00:54:27.215014 3367 log.go:181] (0xc000d1b970) Data frame received for 5\nI0811 00:54:27.215025 3367 log.go:181] (0xc00082d2c0) (5) Data frame handling\nI0811 00:54:27.215055 3367 log.go:181] (0xc000d1b970) Data frame received for 3\nI0811 00:54:27.215084 3367 log.go:181] (0xc000804b40) (3) Data frame handling\nI0811 00:54:27.216538 3367 log.go:181] (0xc000d1b970) Data frame received for 1\nI0811 00:54:27.216572 3367 log.go:181] (0xc0009894a0) (1) Data frame handling\nI0811 00:54:27.216581 3367 log.go:181] (0xc0009894a0) (1) Data frame sent\nI0811 00:54:27.216591 3367 log.go:181] (0xc000d1b970) (0xc0009894a0) Stream removed, broadcasting: 1\nI0811 00:54:27.216810 3367 log.go:181] (0xc000d1b970) Go away received\nI0811 00:54:27.216936 3367 log.go:181] (0xc000d1b970) (0xc0009894a0) Stream removed, broadcasting: 1\nI0811 00:54:27.216949 3367 log.go:181] (0xc000d1b970) (0xc000804b40) Stream removed, broadcasting: 3\nI0811 00:54:27.216953 3367 log.go:181] (0xc000d1b970) (0xc00082d2c0) Stream removed, broadcasting: 5\n" Aug 11 00:54:27.221: INFO: stdout: "affinity-clusterip-timeout-lczpx" Aug 11 00:54:42.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5185 execpod-affinity8gpkr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.151.75:80/' Aug 11 00:54:42.445: INFO: stderr: "I0811 00:54:42.364256 3385 log.go:181] (0xc0005afce0) (0xc00092f360) Create stream\nI0811 00:54:42.364328 3385 log.go:181] (0xc0005afce0) (0xc00092f360) Stream added, broadcasting: 1\nI0811 00:54:42.366480 3385 log.go:181] (0xc0005afce0) Reply frame received for 1\nI0811 00:54:42.366514 3385 log.go:181] (0xc0005afce0) (0xc000973540) Create stream\nI0811 00:54:42.366523 3385 log.go:181] (0xc0005afce0) (0xc000973540) Stream added, broadcasting: 3\nI0811 00:54:42.367362 3385 log.go:181] (0xc0005afce0) Reply frame received for 3\nI0811 00:54:42.367439 3385 log.go:181] (0xc0005afce0) (0xc00082e460) Create stream\nI0811 00:54:42.367453 3385 log.go:181] (0xc0005afce0) (0xc00082e460) Stream added, broadcasting: 5\nI0811 00:54:42.368319 3385 log.go:181] (0xc0005afce0) Reply frame received for 5\nI0811 00:54:42.431787 3385 log.go:181] (0xc0005afce0) Data frame received for 5\nI0811 00:54:42.431814 3385 log.go:181] (0xc00082e460) (5) Data frame handling\nI0811 00:54:42.431826 3385 log.go:181] (0xc00082e460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.151.75:80/\nI0811 00:54:42.437152 3385 log.go:181] (0xc0005afce0) Data frame received for 3\nI0811 00:54:42.437181 3385 log.go:181] (0xc000973540) (3) Data frame handling\nI0811 00:54:42.437199 3385 log.go:181] (0xc000973540) (3) Data frame sent\nI0811 00:54:42.437684 3385 log.go:181] (0xc0005afce0) Data frame received for 3\nI0811 00:54:42.437725 3385 log.go:181] (0xc000973540) (3) Data frame handling\nI0811 00:54:42.437785 3385 log.go:181] (0xc0005afce0) Data frame received for 5\nI0811 00:54:42.437813 3385 log.go:181] (0xc00082e460) (5) Data frame handling\nI0811 00:54:42.439742 3385 log.go:181] (0xc0005afce0) Data frame received for 1\nI0811 
00:54:42.439762 3385 log.go:181] (0xc00092f360) (1) Data frame handling\nI0811 00:54:42.439774 3385 log.go:181] (0xc00092f360) (1) Data frame sent\nI0811 00:54:42.439793 3385 log.go:181] (0xc0005afce0) (0xc00092f360) Stream removed, broadcasting: 1\nI0811 00:54:42.439819 3385 log.go:181] (0xc0005afce0) Go away received\nI0811 00:54:42.440270 3385 log.go:181] (0xc0005afce0) (0xc00092f360) Stream removed, broadcasting: 1\nI0811 00:54:42.440295 3385 log.go:181] (0xc0005afce0) (0xc000973540) Stream removed, broadcasting: 3\nI0811 00:54:42.440304 3385 log.go:181] (0xc0005afce0) (0xc00082e460) Stream removed, broadcasting: 5\n" Aug 11 00:54:42.445: INFO: stdout: "affinity-clusterip-timeout-p9kjf" Aug 11 00:54:42.445: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5185, will wait for the garbage collector to delete the pods Aug 11 00:54:42.564: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.896903ms Aug 11 00:54:43.165: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.403658ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:54:48.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5185" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:42.059 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":260,"skipped":4224,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:54:48.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:54:49.062: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:54:51.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:54:53.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704089, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:54:56.101: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:54:56.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:54:57.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-312" for this suite. STEP: Destroying namespace "webhook-312-markers" for this suite. 
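[Annotation] The denial flow above is driven by a single ValidatingWebhookConfiguration registered against the custom resource's group/version/resource for CREATE, UPDATE and DELETE. A minimal sketch of such an object, assuming the admissionregistration.k8s.io/v1 API: the service namespace and name mirror the log, while the API group, resource name and path are purely illustrative (the test generates its own).

package sketch

import (
	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyCustomResourceWebhook builds a webhook registration that routes every
// create/update/delete of the (illustrative) custom resource to the webhook
// service deployed above, failing closed if the webhook is unreachable.
func denyCustomResourceWebhook(caBundle []byte) *admissionv1.ValidatingWebhookConfiguration {
	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	path := "/custom-resource" // illustrative handler path
	return &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-custom-resource"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-312",     // from the log
					Name:      "e2e-test-webhook", // from the log
					Path:      &path,
				},
				CABundle: caBundle, // cert set up in "Setting up server cert"
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{
					admissionv1.Create, admissionv1.Update, admissionv1.Delete,
				},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"webhook.example.com"}, // illustrative
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-crds"}, // illustrative
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}

With this in place, the apiserver consults the webhook before persisting any matching custom resource, which is why the disallowed create, update and delete above were all rejected until the offending key was removed.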
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.838 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":261,"skipped":4234,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:54:57.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:54:57.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95" in namespace "downward-api-6203" to be "Succeeded or Failed" Aug 11 00:54:57.508: INFO: Pod "downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95": Phase="Pending", Reason="", readiness=false. Elapsed: 41.516767ms Aug 11 00:54:59.550: INFO: Pod "downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083932962s Aug 11 00:55:01.554: INFO: Pod "downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087709936s STEP: Saw pod success Aug 11 00:55:01.554: INFO: Pod "downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95" satisfied condition "Succeeded or Failed" Aug 11 00:55:01.556: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95 container client-container: STEP: delete the pod Aug 11 00:55:01.722: INFO: Waiting for pod downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95 to disappear Aug 11 00:55:01.768: INFO: Pod downwardapi-volume-c1bbda8c-555e-49b9-98b9-f6f1f1f90f95 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:55:01.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6203" for this suite. 
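[Annotation] The pod above exercises the downward-API volume plugin: a file in the volume is populated from the container's own memory request via resourceFieldRef. A sketch of an equivalent pod spec using client-go's core/v1 types; the image, mount path, file name and 32Mi request are illustrative, not the test's actual values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryRequestPod mounts a downwardAPI volume whose single file
// carries the container's requests.memory value (rendered in bytes with the
// default divisor of "1").
func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}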
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4250,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:55:01.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Aug 11 00:55:06.413: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9786 pod-service-account-53fdcce8-aea6-4d9d-8e7a-a606fb487baf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 11 00:55:06.640: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9786 pod-service-account-53fdcce8-aea6-4d9d-8e7a-a606fb487baf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 11 00:55:06.843: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9786 pod-service-account-53fdcce8-aea6-4d9d-8e7a-a606fb487baf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:55:07.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9786" for this suite. 
• [SLOW TEST:5.342 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":263,"skipped":4254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:55:07.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:55:07.189: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Aug 11 00:55:07.233: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:07.237: INFO: Number of nodes with available pods: 0 Aug 11 00:55:07.237: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:08.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:08.245: INFO: Number of nodes with available pods: 0 Aug 11 00:55:08.245: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:09.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:09.244: INFO: Number of nodes with available pods: 0 Aug 11 00:55:09.244: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:10.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:10.245: INFO: Number of nodes with available pods: 0 Aug 11 00:55:10.245: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:11.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:11.244: INFO: Number of nodes with available pods: 1 Aug 11 00:55:11.244: INFO: Node latest-worker2 is running more than one daemon pod Aug 11 00:55:12.243: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:12.248: INFO: 
Number of nodes with available pods: 1 Aug 11 00:55:12.248: INFO: Node latest-worker2 is running more than one daemon pod Aug 11 00:55:13.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:13.246: INFO: Number of nodes with available pods: 2 Aug 11 00:55:13.246: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 11 00:55:13.308: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:13.308: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:13.331: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:14.337: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:14.337: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:14.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:15.335: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:15.335: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:15.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:16.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:16.336: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:16.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:17.337: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:17.337: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:17.337: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:17.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:18.337: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:18.337: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 11 00:55:18.337: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:18.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:19.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:19.336: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:19.336: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:19.345: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:20.337: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:20.337: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:20.337: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:20.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:21.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:21.336: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:21.336: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:21.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:22.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:22.336: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:22.336: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:22.346: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:23.335: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:23.335: INFO: Wrong image for pod: daemon-set-grcd8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:23.335: INFO: Pod daemon-set-grcd8 is not available Aug 11 00:55:23.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:24.337: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 11 00:55:24.337: INFO: Pod daemon-set-j75fj is not available Aug 11 00:55:24.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:25.370: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:25.370: INFO: Pod daemon-set-j75fj is not available Aug 11 00:55:25.373: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:26.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:26.336: INFO: Pod daemon-set-j75fj is not available Aug 11 00:55:26.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:27.335: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:27.335: INFO: Pod daemon-set-j75fj is not available Aug 11 00:55:27.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:28.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:28.343: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:29.336: INFO: Wrong image for pod: daemon-set-27gxj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 11 00:55:29.336: INFO: Pod daemon-set-27gxj is not available Aug 11 00:55:29.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:30.336: INFO: Pod daemon-set-bltpn is not available Aug 11 00:55:30.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 11 00:55:30.345: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:30.349: INFO: Number of nodes with available pods: 1 Aug 11 00:55:30.349: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:31.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:31.356: INFO: Number of nodes with available pods: 1 Aug 11 00:55:31.356: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:32.354: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:32.357: INFO: Number of nodes with available pods: 1 Aug 11 00:55:32.357: INFO: Node latest-worker is running more than one daemon pod Aug 11 00:55:33.354: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 00:55:33.357: INFO: Number of nodes with available pods: 2 Aug 11 00:55:33.357: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2401, will wait for the garbage collector to delete the pods Aug 11 00:55:33.427: INFO: Deleting DaemonSet.extensions daemon-set took: 5.615822ms Aug 11 00:55:33.827: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.235866ms Aug 11 00:55:43.238: INFO: Number of nodes with available pods: 0 Aug 11 00:55:43.238: INFO: Number of running nodes: 0, number of available pods: 0 Aug 11 00:55:43.240: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2401/daemonsets","resourceVersion":"6063979"},"items":null} Aug 11 00:55:43.242: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2401/pods","resourceVersion":"6063979"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:55:43.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2401" for this suite. 
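[Annotation] The one-pod-at-a-time churn above ("Pod daemon-set-grcd8 is not available" while the other node keeps serving) is the RollingUpdate strategy's default maxUnavailable of 1 at work. A sketch of the DaemonSet plus the image bump that triggers the rollout, using the two images named in the log; selector, labels and container name are illustrative.

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollingUpdateDaemonSet creates a DaemonSet with an explicit RollingUpdate
// strategy, then updates the pod template image, which is what kicks off the
// node-by-node replacement watched above.
func rollingUpdateDaemonSet(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				// RollingUpdate is the default; shown explicitly for clarity.
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	created, err := cs.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Changing the template image triggers the rolling update.
	created.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.20"
	_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}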
• [SLOW TEST:36.135 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":264,"skipped":4303,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:55:43.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 11 00:55:43.329: INFO: Waiting up to 5m0s for pod "pod-89220315-1b8c-4517-a6c6-c83f0e5be185" in namespace "emptydir-3654" to be "Succeeded or Failed" Aug 11 00:55:43.333: INFO: Pod "pod-89220315-1b8c-4517-a6c6-c83f0e5be185": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431679ms Aug 11 00:55:45.336: INFO: Pod "pod-89220315-1b8c-4517-a6c6-c83f0e5be185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006862006s Aug 11 00:55:47.342: INFO: Pod "pod-89220315-1b8c-4517-a6c6-c83f0e5be185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013018675s STEP: Saw pod success Aug 11 00:55:47.342: INFO: Pod "pod-89220315-1b8c-4517-a6c6-c83f0e5be185" satisfied condition "Succeeded or Failed" Aug 11 00:55:47.345: INFO: Trying to get logs from node latest-worker2 pod pod-89220315-1b8c-4517-a6c6-c83f0e5be185 container test-container: STEP: delete the pod Aug 11 00:55:47.365: INFO: Waiting for pod pod-89220315-1b8c-4517-a6c6-c83f0e5be185 to disappear Aug 11 00:55:47.390: INFO: Pod pod-89220315-1b8c-4517-a6c6-c83f0e5be185 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:55:47.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3654" for this suite. 
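[Annotation] The volume under test here is an emptyDir backed by tmpfs (medium: Memory), and the test container's job is to report the mount's filesystem type and mode. A sketch of an equivalent pod, with an illustrative busybox image and stat/mount commands standing in for the suite's own checker binary.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts a memory-backed emptyDir and prints the mount's
// filesystem type and permission bits, roughly what the test asserts on.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c",
					"mount | grep /test-volume; stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory is what makes this emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}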
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4311,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:55:47.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 11 00:55:47.463: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 11 00:55:47.469: INFO: Waiting for terminating namespaces to be deleted... Aug 11 00:55:47.471: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 11 00:55:47.475: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.475: INFO: Container coredns ready: true, restart count 0 Aug 11 00:55:47.475: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.475: INFO: Container coredns ready: true, restart count 0 Aug 11 00:55:47.475: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.475: INFO: Container kindnet-cni ready: true, restart count 0 Aug 11 00:55:47.475: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.475: INFO: Container kube-proxy ready: true, restart count 0 Aug 11 00:55:47.475: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.475: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 11 00:55:47.475: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 11 00:55:47.478: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.478: INFO: Container kindnet-cni ready: true, restart count 0 Aug 11 00:55:47.478: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 11 00:55:47.478: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Aug 11 00:55:47.551: INFO: Pod coredns-f9fd979d6-s745j requesting resource cpu=100m on Node latest-worker Aug 11 00:55:47.551: INFO: Pod coredns-f9fd979d6-zs4sj requesting resource cpu=100m on Node latest-worker Aug 11 00:55:47.551: INFO: Pod kindnet-46dnt requesting resource cpu=100m on Node latest-worker Aug 11 00:55:47.551: INFO: Pod kindnet-g6zbt requesting resource cpu=100m on Node 
latest-worker2 Aug 11 00:55:47.551: INFO: Pod kube-proxy-nsnzn requesting resource cpu=0m on Node latest-worker2 Aug 11 00:55:47.551: INFO: Pod kube-proxy-sxpg9 requesting resource cpu=0m on Node latest-worker Aug 11 00:55:47.551: INFO: Pod local-path-provisioner-8b46957d4-2gzpd requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Aug 11 00:55:47.551: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker Aug 11 00:55:47.576: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43.162a10ac9329354f], Reason = [Created], Message = [Created container filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43] STEP: Considering event: Type = [Normal], Name = [filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100.162a10ac9e48edb5], Reason = [Started], Message = [Started container filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100] STEP: Considering event: Type = [Normal], Name = [filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100.162a10abba494577], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5881/filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43.162a10abbc4fc045], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5881/filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43.162a10ac259ba3ca], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100.162a10ac875efd40], Reason = [Created], Message = [Created container filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100] STEP: Considering event: Type = [Normal], Name = [filler-pod-9fb7a0f5-001d-4f72-b81e-9977fbe20100.162a10ac11b138c4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43.162a10aca45fa848], Reason = [Started], Message = [Started container filler-pod-1b4a8e2c-c2e4-4264-98e4-c0e2497a1b43] STEP: Considering event: Type = [Warning], Name = [additional-pod.162a10ad24028b00], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162a10ad25aff227], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:55:54.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5881" for this suite. 
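[Annotation] The FailedScheduling events above come from saturating each node's allocatable CPU: one filler pod per node requests nearly everything left (10990m and 11130m in this run), after which any further CPU request fails with "Insufficient cpu". A sketch of such a filler pod, pinned through the "node" label the test applied to each worker; the pause image matches the log, sizes are per-run values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod requests a fixed slice of CPU on one labelled node, leaving no
// headroom for the "additional" pod the test schedules afterwards.
func fillerPod(name, nodeLabel, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// The test labels each worker with key "node" before creating fillers.
			NodeSelector: map[string]string{"node": nodeLabel},
			Containers: []corev1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
}

Something like fillerPod("filler-pod-a", "latest-worker", "10990m") consumes the node's remaining CPU, so the subsequent pod is rejected by the scheduler exactly as the two FailedScheduling events record.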
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.342 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":266,"skipped":4317,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:55:54.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-9b1817c4-1f20-4e0c-baef-d5964333a04a STEP: Creating a pod to test consume configMaps Aug 11 00:55:54.916: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e" in namespace "projected-1409" to be "Succeeded or Failed" Aug 11 00:55:54.921: INFO: Pod "pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.744947ms Aug 11 00:55:56.926: INFO: Pod "pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009452844s Aug 11 00:55:58.931: INFO: Pod "pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014036609s STEP: Saw pod success Aug 11 00:55:58.931: INFO: Pod "pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e" satisfied condition "Succeeded or Failed" Aug 11 00:55:58.934: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e container projected-configmap-volume-test: STEP: delete the pod Aug 11 00:55:58.970: INFO: Waiting for pod pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e to disappear Aug 11 00:55:58.975: INFO: Pod pod-projected-configmaps-a31a622c-7389-41d7-96ed-6ab8a3ef940e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:55:58.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1409" for this suite. 
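[Annotation] "With mappings" means the projected ConfigMap's keys are remapped to custom file paths via items, rather than surfacing under their literal key names. A sketch of the volume definition; the key and path values are illustrative.

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume projects one ConfigMap key to a nested file path
// inside the mount, which is the mapping behaviour the test verifies.
func projectedConfigMapVolume(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key in the ConfigMap (illustrative)
							Path: "path/to/data-2", // file name inside the mount (illustrative)
						}},
					},
				}},
			},
		},
	}
}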
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4330,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:55:58.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:56:04.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-551" for this suite. • [SLOW TEST:5.506 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":268,"skipped":4350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:56:04.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0811 00:56:14.585781 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 11 00:57:16.606: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:57:16.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1496" for this suite. 
• [SLOW TEST:72.126 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":269,"skipped":4381,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:57:16.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-929bdf85-9cc0-45bb-a116-94b626d6523c STEP: Creating configMap with name cm-test-opt-upd-f6ecfe28-e1f5-4a21-8ed5-81d032873203 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-929bdf85-9cc0-45bb-a116-94b626d6523c STEP: Updating configmap cm-test-opt-upd-f6ecfe28-e1f5-4a21-8ed5-81d032873203 STEP: Creating configMap with name cm-test-opt-create-37cb0ebf-45c7-473f-b7cd-fb36f0633a49 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:57:24.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4595" for this suite. 
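Note: the "optional updates" spec above hinges on the optional flag of a projected ConfigMap source: the pod starts even while one of the referenced ConfigMaps (cm-test-opt-create-...) does not exist yet, and the kubelet materializes or refreshes the mounted files in place once the ConfigMap is created or updated, within its sync period. A minimal sketch with illustrative names (optional-cm-demo, cm-might-not-exist-yet):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-might-not-exist-yet
          optional: true    # pod starts even though the ConfigMap is absent
EOF
# later: create the ConfigMap and watch the kubelet populate /etc/cfg without a pod restart
kubectl create configmap cm-might-not-exist-yet --from-literal=key-1=value-1
kubectl exec optional-cm-demo -- ls /etc/cfg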
• [SLOW TEST:8.279 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:57:24.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5136 STEP: creating service affinity-nodeport-transition in namespace services-5136 STEP: creating replication controller affinity-nodeport-transition in namespace services-5136 I0811 00:57:25.066690 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5136, replica count: 3 I0811 00:57:28.117061 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 00:57:31.117277 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 11 00:57:31.134: INFO: Creating new exec pod Aug 11 00:57:36.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5136 execpod-affinitysn6wq -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Aug 11 00:57:39.233: INFO: stderr: "I0811 00:57:39.117394 3450 log.go:181] (0xc0001411e0) (0xc0008a6a00) Create stream\nI0811 00:57:39.117443 3450 log.go:181] (0xc0001411e0) (0xc0008a6a00) Stream added, broadcasting: 1\nI0811 00:57:39.121344 3450 log.go:181] (0xc0001411e0) Reply frame received for 1\nI0811 00:57:39.121396 3450 log.go:181] (0xc0001411e0) (0xc0008a7a40) Create stream\nI0811 00:57:39.121413 3450 log.go:181] (0xc0001411e0) (0xc0008a7a40) Stream added, broadcasting: 3\nI0811 00:57:39.122909 3450 log.go:181] (0xc0001411e0) Reply frame received for 3\nI0811 00:57:39.122942 3450 log.go:181] (0xc0001411e0) (0xc0008361e0) Create stream\nI0811 00:57:39.122963 3450 log.go:181] (0xc0001411e0) (0xc0008361e0) Stream added, broadcasting: 5\nI0811 00:57:39.127133 3450 log.go:181] (0xc0001411e0) Reply frame received for 5\nI0811 00:57:39.223875 3450 log.go:181] (0xc0001411e0) Data frame received for 3\nI0811 00:57:39.223910 3450 log.go:181] (0xc0008a7a40) (3) Data 
frame handling\nI0811 00:57:39.223957 3450 log.go:181] (0xc0001411e0) Data frame received for 5\nI0811 00:57:39.223996 3450 log.go:181] (0xc0008361e0) (5) Data frame handling\nI0811 00:57:39.224032 3450 log.go:181] (0xc0008361e0) (5) Data frame sent\nI0811 00:57:39.224067 3450 log.go:181] (0xc0001411e0) Data frame received for 5\nI0811 00:57:39.224081 3450 log.go:181] (0xc0008361e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0811 00:57:39.225553 3450 log.go:181] (0xc0001411e0) Data frame received for 1\nI0811 00:57:39.225587 3450 log.go:181] (0xc0008a6a00) (1) Data frame handling\nI0811 00:57:39.225627 3450 log.go:181] (0xc0008a6a00) (1) Data frame sent\nI0811 00:57:39.225704 3450 log.go:181] (0xc0001411e0) (0xc0008a6a00) Stream removed, broadcasting: 1\nI0811 00:57:39.226033 3450 log.go:181] (0xc0001411e0) Go away received\nI0811 00:57:39.226350 3450 log.go:181] (0xc0001411e0) (0xc0008a6a00) Stream removed, broadcasting: 1\nI0811 00:57:39.226372 3450 log.go:181] (0xc0001411e0) (0xc0008a7a40) Stream removed, broadcasting: 3\nI0811 00:57:39.226384 3450 log.go:181] (0xc0001411e0) (0xc0008361e0) Stream removed, broadcasting: 5\n" Aug 11 00:57:39.233: INFO: stdout: "" Aug 11 00:57:39.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5136 execpod-affinitysn6wq -- /bin/sh -x -c nc -zv -t -w 2 10.99.14.212 80' Aug 11 00:57:39.430: INFO: stderr: "I0811 00:57:39.357395 3468 log.go:181] (0xc0009351e0) (0xc000e92780) Create stream\nI0811 00:57:39.357453 3468 log.go:181] (0xc0009351e0) (0xc000e92780) Stream added, broadcasting: 1\nI0811 00:57:39.361316 3468 log.go:181] (0xc0009351e0) Reply frame received for 1\nI0811 00:57:39.361359 3468 log.go:181] (0xc0009351e0) (0xc00089c820) Create stream\nI0811 00:57:39.361369 3468 log.go:181] (0xc0009351e0) (0xc00089c820) Stream added, broadcasting: 3\nI0811 00:57:39.362260 3468 log.go:181] (0xc0009351e0) Reply frame received for 3\nI0811 00:57:39.362290 3468 log.go:181] (0xc0009351e0) (0xc00087e000) Create stream\nI0811 00:57:39.362300 3468 log.go:181] (0xc0009351e0) (0xc00087e000) Stream added, broadcasting: 5\nI0811 00:57:39.363327 3468 log.go:181] (0xc0009351e0) Reply frame received for 5\nI0811 00:57:39.424571 3468 log.go:181] (0xc0009351e0) Data frame received for 5\nI0811 00:57:39.424825 3468 log.go:181] (0xc00087e000) (5) Data frame handling\nI0811 00:57:39.424979 3468 log.go:181] (0xc00087e000) (5) Data frame sent\n+ nc -zv -t -w 2 10.99.14.212 80\nConnection to 10.99.14.212 80 port [tcp/http] succeeded!\nI0811 00:57:39.425110 3468 log.go:181] (0xc0009351e0) Data frame received for 5\nI0811 00:57:39.425149 3468 log.go:181] (0xc00087e000) (5) Data frame handling\nI0811 00:57:39.425183 3468 log.go:181] (0xc0009351e0) Data frame received for 3\nI0811 00:57:39.425201 3468 log.go:181] (0xc00089c820) (3) Data frame handling\nI0811 00:57:39.426294 3468 log.go:181] (0xc0009351e0) Data frame received for 1\nI0811 00:57:39.426320 3468 log.go:181] (0xc000e92780) (1) Data frame handling\nI0811 00:57:39.426331 3468 log.go:181] (0xc000e92780) (1) Data frame sent\nI0811 00:57:39.426426 3468 log.go:181] (0xc0009351e0) (0xc000e92780) Stream removed, broadcasting: 1\nI0811 00:57:39.426510 3468 log.go:181] (0xc0009351e0) Go away received\nI0811 00:57:39.426888 3468 log.go:181] (0xc0009351e0) (0xc000e92780) Stream removed, broadcasting: 1\nI0811 00:57:39.426911 3468 log.go:181] 
(0xc0009351e0) (0xc00089c820) Stream removed, broadcasting: 3\nI0811 00:57:39.426919 3468 log.go:181] (0xc0009351e0) (0xc00087e000) Stream removed, broadcasting: 5\n" Aug 11 00:57:39.430: INFO: stdout: "" Aug 11 00:57:39.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5136 execpod-affinitysn6wq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32354' Aug 11 00:57:39.670: INFO: stderr: "I0811 00:57:39.591265 3486 log.go:181] (0xc000940d10) (0xc00096c460) Create stream\nI0811 00:57:39.591309 3486 log.go:181] (0xc000940d10) (0xc00096c460) Stream added, broadcasting: 1\nI0811 00:57:39.594863 3486 log.go:181] (0xc000940d10) Reply frame received for 1\nI0811 00:57:39.594896 3486 log.go:181] (0xc000940d10) (0xc000b8bc20) Create stream\nI0811 00:57:39.594903 3486 log.go:181] (0xc000940d10) (0xc000b8bc20) Stream added, broadcasting: 3\nI0811 00:57:39.595489 3486 log.go:181] (0xc000940d10) Reply frame received for 3\nI0811 00:57:39.595512 3486 log.go:181] (0xc000940d10) (0xc0005f4000) Create stream\nI0811 00:57:39.595519 3486 log.go:181] (0xc000940d10) (0xc0005f4000) Stream added, broadcasting: 5\nI0811 00:57:39.596108 3486 log.go:181] (0xc000940d10) Reply frame received for 5\nI0811 00:57:39.661618 3486 log.go:181] (0xc000940d10) Data frame received for 5\nI0811 00:57:39.661673 3486 log.go:181] (0xc0005f4000) (5) Data frame handling\nI0811 00:57:39.661702 3486 log.go:181] (0xc0005f4000) (5) Data frame sent\nI0811 00:57:39.661718 3486 log.go:181] (0xc000940d10) Data frame received for 5\nI0811 00:57:39.661729 3486 log.go:181] (0xc0005f4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32354\nConnection to 172.18.0.14 32354 port [tcp/32354] succeeded!\nI0811 00:57:39.661803 3486 log.go:181] (0xc0005f4000) (5) Data frame sent\nI0811 00:57:39.662240 3486 log.go:181] (0xc000940d10) Data frame received for 3\nI0811 00:57:39.662289 3486 log.go:181] (0xc000b8bc20) (3) Data frame handling\nI0811 00:57:39.662337 3486 log.go:181] (0xc000940d10) Data frame received for 5\nI0811 00:57:39.662376 3486 log.go:181] (0xc0005f4000) (5) Data frame handling\nI0811 00:57:39.663961 3486 log.go:181] (0xc000940d10) Data frame received for 1\nI0811 00:57:39.663981 3486 log.go:181] (0xc00096c460) (1) Data frame handling\nI0811 00:57:39.663991 3486 log.go:181] (0xc00096c460) (1) Data frame sent\nI0811 00:57:39.664003 3486 log.go:181] (0xc000940d10) (0xc00096c460) Stream removed, broadcasting: 1\nI0811 00:57:39.664024 3486 log.go:181] (0xc000940d10) Go away received\nI0811 00:57:39.664496 3486 log.go:181] (0xc000940d10) (0xc00096c460) Stream removed, broadcasting: 1\nI0811 00:57:39.664520 3486 log.go:181] (0xc000940d10) (0xc000b8bc20) Stream removed, broadcasting: 3\nI0811 00:57:39.664535 3486 log.go:181] (0xc000940d10) (0xc0005f4000) Stream removed, broadcasting: 5\n" Aug 11 00:57:39.670: INFO: stdout: "" Aug 11 00:57:39.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5136 execpod-affinitysn6wq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32354' Aug 11 00:57:39.898: INFO: stderr: "I0811 00:57:39.801715 3504 log.go:181] (0xc000b18d10) (0xc000a2a460) Create stream\nI0811 00:57:39.801770 3504 log.go:181] (0xc000b18d10) (0xc000a2a460) Stream added, broadcasting: 1\nI0811 00:57:39.807932 3504 log.go:181] (0xc000b18d10) Reply frame received for 1\nI0811 00:57:39.807964 3504 log.go:181] (0xc000b18d10) (0xc0008bd0e0) Create stream\nI0811 00:57:39.807972 
3504 log.go:181] (0xc000b18d10) (0xc0008bd0e0) Stream added, broadcasting: 3\nI0811 00:57:39.808937 3504 log.go:181] (0xc000b18d10) Reply frame received for 3\nI0811 00:57:39.808966 3504 log.go:181] (0xc000b18d10) (0xc000826640) Create stream\nI0811 00:57:39.808975 3504 log.go:181] (0xc000b18d10) (0xc000826640) Stream added, broadcasting: 5\nI0811 00:57:39.809753 3504 log.go:181] (0xc000b18d10) Reply frame received for 5\nI0811 00:57:39.887004 3504 log.go:181] (0xc000b18d10) Data frame received for 3\nI0811 00:57:39.887033 3504 log.go:181] (0xc0008bd0e0) (3) Data frame handling\nI0811 00:57:39.887131 3504 log.go:181] (0xc000b18d10) Data frame received for 5\nI0811 00:57:39.887161 3504 log.go:181] (0xc000826640) (5) Data frame handling\nI0811 00:57:39.887182 3504 log.go:181] (0xc000826640) (5) Data frame sent\nI0811 00:57:39.887194 3504 log.go:181] (0xc000b18d10) Data frame received for 5\nI0811 00:57:39.887213 3504 log.go:181] (0xc000826640) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32354\nConnection to 172.18.0.12 32354 port [tcp/32354] succeeded!\nI0811 00:57:39.891751 3504 log.go:181] (0xc000b18d10) Data frame received for 1\nI0811 00:57:39.891774 3504 log.go:181] (0xc000a2a460) (1) Data frame handling\nI0811 00:57:39.891786 3504 log.go:181] (0xc000a2a460) (1) Data frame sent\nI0811 00:57:39.891802 3504 log.go:181] (0xc000b18d10) (0xc000a2a460) Stream removed, broadcasting: 1\nI0811 00:57:39.891816 3504 log.go:181] (0xc000b18d10) Go away received\nI0811 00:57:39.892493 3504 log.go:181] (0xc000b18d10) (0xc000a2a460) Stream removed, broadcasting: 1\nI0811 00:57:39.892527 3504 log.go:181] (0xc000b18d10) (0xc0008bd0e0) Stream removed, broadcasting: 3\nI0811 00:57:39.892543 3504 log.go:181] (0xc000b18d10) (0xc000826640) Stream removed, broadcasting: 5\n" Aug 11 00:57:39.898: INFO: stdout: "" Aug 11 00:57:39.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5136 execpod-affinitysn6wq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32354/ ; done' Aug 11 00:57:40.274: INFO: stderr: "I0811 00:57:40.096937 3522 log.go:181] (0xc00018c420) (0xc000b30820) Create stream\nI0811 00:57:40.096991 3522 log.go:181] (0xc00018c420) (0xc000b30820) Stream added, broadcasting: 1\nI0811 00:57:40.099117 3522 log.go:181] (0xc00018c420) Reply frame received for 1\nI0811 00:57:40.099150 3522 log.go:181] (0xc00018c420) (0xc000b28320) Create stream\nI0811 00:57:40.099160 3522 log.go:181] (0xc00018c420) (0xc000b28320) Stream added, broadcasting: 3\nI0811 00:57:40.100405 3522 log.go:181] (0xc00018c420) Reply frame received for 3\nI0811 00:57:40.100463 3522 log.go:181] (0xc00018c420) (0xc000b5f4a0) Create stream\nI0811 00:57:40.100489 3522 log.go:181] (0xc00018c420) (0xc000b5f4a0) Stream added, broadcasting: 5\nI0811 00:57:40.101450 3522 log.go:181] (0xc00018c420) Reply frame received for 5\nI0811 00:57:40.178021 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.178051 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.178062 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.178082 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.178088 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.178095 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.180324 3522 log.go:181] 
(0xc00018c420) Data frame received for 3\nI0811 00:57:40.180342 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.180349 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.181198 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.181214 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.181228 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\nI0811 00:57:40.185188 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.185220 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.185230 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\nI0811 00:57:40.185238 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.185248 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.185267 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\nI0811 00:57:40.185276 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.185283 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.185294 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.185938 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.185956 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.185975 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.186354 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.186369 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.186376 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.186382 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.186406 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.186418 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.190285 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.190299 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.190306 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.190727 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.190763 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.190784 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.190828 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.190844 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.190855 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.195521 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.195545 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.195569 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.196008 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.196021 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.196026 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.196042 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.196053 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.196066 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.201029 3522 
log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.201041 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.201051 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.201677 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.201698 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.201707 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.201716 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.201721 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.201727 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.205840 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.205857 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.205869 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.206293 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.206304 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.206311 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.206317 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.206332 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.206339 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.209387 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.209419 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.209442 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.209939 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.209953 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.209969 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.209990 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.209999 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.210018 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.215211 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.215231 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.215242 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.215718 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.215739 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.215758 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.215765 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.215776 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.215784 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.220466 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.220481 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.220494 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.221061 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.221080 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.221087 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 
00:57:40.221106 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.221117 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.221125 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.226827 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.226839 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.226852 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.227792 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.227824 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.227835 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -qI0811 00:57:40.227868 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.227897 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.227909 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.227924 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.227930 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.227938 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.232202 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.232222 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.232238 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.233052 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.233068 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.233075 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.233086 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.233092 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.233097 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.241487 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.241512 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.241526 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.241546 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.241555 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.241563 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.241572 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.241598 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.241618 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.246594 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.246620 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.246637 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.247100 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.247130 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.247142 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.247157 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.247165 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.247176 3522 log.go:181] (0xc000b5f4a0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.252556 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.252577 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.252592 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.253342 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.253355 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.253370 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.253429 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.253445 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.253462 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.257493 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.257511 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.257531 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.258520 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.258550 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.258570 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.258594 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.258607 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.258633 3522 log.go:181] (0xc000b5f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.264527 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.264561 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.264601 3522 log.go:181] (0xc000b28320) (3) Data frame sent\nI0811 00:57:40.265361 3522 log.go:181] (0xc00018c420) Data frame received for 3\nI0811 00:57:40.265463 3522 log.go:181] (0xc000b28320) (3) Data frame handling\nI0811 00:57:40.265496 3522 log.go:181] (0xc00018c420) Data frame received for 5\nI0811 00:57:40.265522 3522 log.go:181] (0xc000b5f4a0) (5) Data frame handling\nI0811 00:57:40.267426 3522 log.go:181] (0xc00018c420) Data frame received for 1\nI0811 00:57:40.267467 3522 log.go:181] (0xc000b30820) (1) Data frame handling\nI0811 00:57:40.267485 3522 log.go:181] (0xc000b30820) (1) Data frame sent\nI0811 00:57:40.267502 3522 log.go:181] (0xc00018c420) (0xc000b30820) Stream removed, broadcasting: 1\nI0811 00:57:40.267651 3522 log.go:181] (0xc00018c420) Go away received\nI0811 00:57:40.268035 3522 log.go:181] (0xc00018c420) (0xc000b30820) Stream removed, broadcasting: 1\nI0811 00:57:40.268056 3522 log.go:181] (0xc00018c420) (0xc000b28320) Stream removed, broadcasting: 3\nI0811 00:57:40.268066 3522 log.go:181] (0xc00018c420) (0xc000b5f4a0) Stream removed, broadcasting: 5\n" Aug 11 00:57:40.275: INFO: stdout: "\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-ngq74\naffinity-nodeport-transition-99zw4\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-99zw4\naffinity-nodeport-transition-99zw4\naffinity-nodeport-transition-ngq74\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-ngq74\naffinity-nodeport-transition-ngq74\naffinity-nodeport-transition-ngq74\naffinity-nodeport-transition-99zw4" Aug 11 00:57:40.275: INFO: Received response 
from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-ngq74 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-99zw4 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-99zw4 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-99zw4 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-ngq74 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-ngq74 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-ngq74 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-ngq74 Aug 11 00:57:40.275: INFO: Received response from host: affinity-nodeport-transition-99zw4 Aug 11 00:57:40.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-5136 execpod-affinitysn6wq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32354/ ; done' Aug 11 00:57:40.618: INFO: stderr: "I0811 00:57:40.426350 3540 log.go:181] (0xc001008f20) (0xc000e44960) Create stream\nI0811 00:57:40.426419 3540 log.go:181] (0xc001008f20) (0xc000e44960) Stream added, broadcasting: 1\nI0811 00:57:40.428691 3540 log.go:181] (0xc001008f20) Reply frame received for 1\nI0811 00:57:40.428827 3540 log.go:181] (0xc001008f20) (0xc000193b80) Create stream\nI0811 00:57:40.428846 3540 log.go:181] (0xc001008f20) (0xc000193b80) Stream added, broadcasting: 3\nI0811 00:57:40.429714 3540 log.go:181] (0xc001008f20) Reply frame received for 3\nI0811 00:57:40.429755 3540 log.go:181] (0xc001008f20) (0xc0002e60a0) Create stream\nI0811 00:57:40.429768 3540 log.go:181] (0xc001008f20) (0xc0002e60a0) Stream added, broadcasting: 5\nI0811 00:57:40.430656 3540 log.go:181] (0xc001008f20) Reply frame received for 5\nI0811 00:57:40.501577 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.501602 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.501616 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.501644 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.501676 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.501703 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.509057 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.509070 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.509082 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.509676 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.509694 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.509699 3540 log.go:181] (0xc000193b80) (3) Data frame 
sent\nI0811 00:57:40.509715 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.509729 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.509741 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.515136 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.515148 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.515154 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.516046 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.516068 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.516088 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.516099 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.516110 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.516124 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.522893 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.522917 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.522938 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.523772 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.523788 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.523800 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.523816 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.523830 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.523841 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\nI0811 00:57:40.523851 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.523862 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0811 00:57:40.523907 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n http://172.18.0.14:32354/\nI0811 00:57:40.529091 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.529120 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.529149 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.529725 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.529750 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.529767 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.529994 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.530013 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.530034 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.534188 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.534205 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.534231 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.535122 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.535144 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.535151 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.535162 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.535179 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.535210 3540 log.go:181] (0xc0002e60a0) (5) 
Data frame sent\nI0811 00:57:40.535223 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.535229 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.535242 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\nI0811 00:57:40.542029 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.542059 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.542078 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.542651 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.542685 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.542711 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.542743 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.542774 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.542801 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\nI0811 00:57:40.542811 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.542818 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.542839 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\nI0811 00:57:40.548148 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.548175 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.548198 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.548987 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.549032 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.549054 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.549088 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.549118 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.549149 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.553669 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.553693 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.553712 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.554751 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.554779 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.554794 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.554811 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.554821 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.554837 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.560957 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.560976 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.560990 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.562106 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.562147 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.562166 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.562188 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.562200 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.562211 3540 log.go:181] 
(0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.567981 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.568003 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.568012 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.568021 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.568027 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.568044 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.573675 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.573691 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.573699 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.574642 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.574667 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.574691 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.574704 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/I0811 00:57:40.574715 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.574724 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.574734 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n\nI0811 00:57:40.574749 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.574761 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.579278 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.579303 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.579322 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.580010 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.580042 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.580062 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.580095 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.580121 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.580156 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.586091 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.586113 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.586145 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.586629 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.586655 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.586695 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.586807 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.586829 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.586840 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.593283 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.593319 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.593356 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.593996 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.594011 3540 log.go:181] 
(0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.594018 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.594032 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.594047 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.594054 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.599562 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.599587 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.599624 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.600028 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.600057 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.600084 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.600129 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.600166 3540 log.go:181] (0xc0002e60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32354/\nI0811 00:57:40.600208 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.607236 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.607254 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.607265 3540 log.go:181] (0xc000193b80) (3) Data frame sent\nI0811 00:57:40.608202 3540 log.go:181] (0xc001008f20) Data frame received for 5\nI0811 00:57:40.608233 3540 log.go:181] (0xc0002e60a0) (5) Data frame handling\nI0811 00:57:40.608259 3540 log.go:181] (0xc001008f20) Data frame received for 3\nI0811 00:57:40.608279 3540 log.go:181] (0xc000193b80) (3) Data frame handling\nI0811 00:57:40.610703 3540 log.go:181] (0xc001008f20) Data frame received for 1\nI0811 00:57:40.610741 3540 log.go:181] (0xc000e44960) (1) Data frame handling\nI0811 00:57:40.610775 3540 log.go:181] (0xc000e44960) (1) Data frame sent\nI0811 00:57:40.610799 3540 log.go:181] (0xc001008f20) (0xc000e44960) Stream removed, broadcasting: 1\nI0811 00:57:40.610822 3540 log.go:181] (0xc001008f20) Go away received\nI0811 00:57:40.611322 3540 log.go:181] (0xc001008f20) (0xc000e44960) Stream removed, broadcasting: 1\nI0811 00:57:40.611357 3540 log.go:181] (0xc001008f20) (0xc000193b80) Stream removed, broadcasting: 3\nI0811 00:57:40.611379 3540 log.go:181] (0xc001008f20) (0xc0002e60a0) Stream removed, broadcasting: 5\n" Aug 11 00:57:40.619: INFO: stdout: "\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl\naffinity-nodeport-transition-b8dxl" Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: 
INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Received response from host: affinity-nodeport-transition-b8dxl Aug 11 00:57:40.619: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5136, will wait for the garbage collector to delete the pods Aug 11 00:57:41.112: INFO: Deleting ReplicationController affinity-nodeport-transition took: 341.151216ms Aug 11 00:57:41.712: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.215948ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:57:53.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5136" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.077 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":271,"skipped":4428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:57:53.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-740 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 11 00:57:54.072: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 11 00:57:54.157: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = 
true) Aug 11 00:57:56.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:57:58.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 11 00:58:00.161: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:58:02.161: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:58:04.162: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:58:06.161: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 11 00:58:08.175: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 11 00:58:08.179: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 11 00:58:10.183: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 11 00:58:12.187: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 11 00:58:16.241: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.217:8080/dial?request=hostname&protocol=udp&host=10.244.1.220&port=8081&tries=1'] Namespace:pod-network-test-740 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:58:16.241: INFO: >>> kubeConfig: /root/.kube/config I0811 00:58:16.280433 7 log.go:181] (0xc000525ef0) (0xc0032ccc80) Create stream I0811 00:58:16.280509 7 log.go:181] (0xc000525ef0) (0xc0032ccc80) Stream added, broadcasting: 1 I0811 00:58:16.283429 7 log.go:181] (0xc000525ef0) Reply frame received for 1 I0811 00:58:16.283492 7 log.go:181] (0xc000525ef0) (0xc0032ccd20) Create stream I0811 00:58:16.283535 7 log.go:181] (0xc000525ef0) (0xc0032ccd20) Stream added, broadcasting: 3 I0811 00:58:16.284930 7 log.go:181] (0xc000525ef0) Reply frame received for 3 I0811 00:58:16.284982 7 log.go:181] (0xc000525ef0) (0xc003725e00) Create stream I0811 00:58:16.284998 7 log.go:181] (0xc000525ef0) (0xc003725e00) Stream added, broadcasting: 5 I0811 00:58:16.286227 7 log.go:181] (0xc000525ef0) Reply frame received for 5 I0811 00:58:16.361482 7 log.go:181] (0xc000525ef0) Data frame received for 3 I0811 00:58:16.361521 7 log.go:181] (0xc0032ccd20) (3) Data frame handling I0811 00:58:16.361544 7 log.go:181] (0xc0032ccd20) (3) Data frame sent I0811 00:58:16.362178 7 log.go:181] (0xc000525ef0) Data frame received for 3 I0811 00:58:16.362206 7 log.go:181] (0xc0032ccd20) (3) Data frame handling I0811 00:58:16.362475 7 log.go:181] (0xc000525ef0) Data frame received for 5 I0811 00:58:16.362502 7 log.go:181] (0xc003725e00) (5) Data frame handling I0811 00:58:16.365483 7 log.go:181] (0xc000525ef0) Data frame received for 1 I0811 00:58:16.365507 7 log.go:181] (0xc0032ccc80) (1) Data frame handling I0811 00:58:16.365525 7 log.go:181] (0xc0032ccc80) (1) Data frame sent I0811 00:58:16.365788 7 log.go:181] (0xc000525ef0) (0xc0032ccc80) Stream removed, broadcasting: 1 I0811 00:58:16.365814 7 log.go:181] (0xc000525ef0) Go away received I0811 00:58:16.365893 7 log.go:181] (0xc000525ef0) (0xc0032ccc80) Stream removed, broadcasting: 1 I0811 00:58:16.365912 7 log.go:181] (0xc000525ef0) (0xc0032ccd20) Stream removed, broadcasting: 3 I0811 00:58:16.365924 7 log.go:181] (0xc000525ef0) (0xc003725e00) Stream removed, broadcasting: 5 Aug 11 00:58:16.365: INFO: Waiting for responses: map[] Aug 11 00:58:16.368: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.217:8080/dial?request=hostname&protocol=udp&host=10.244.2.216&port=8081&tries=1'] 
Namespace:pod-network-test-740 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 00:58:16.368: INFO: >>> kubeConfig: /root/.kube/config I0811 00:58:16.393561 7 log.go:181] (0xc002ff0790) (0xc0032cd040) Create stream I0811 00:58:16.393602 7 log.go:181] (0xc002ff0790) (0xc0032cd040) Stream added, broadcasting: 1 I0811 00:58:16.395202 7 log.go:181] (0xc002ff0790) Reply frame received for 1 I0811 00:58:16.395227 7 log.go:181] (0xc002ff0790) (0xc003725ea0) Create stream I0811 00:58:16.395238 7 log.go:181] (0xc002ff0790) (0xc003725ea0) Stream added, broadcasting: 3 I0811 00:58:16.395919 7 log.go:181] (0xc002ff0790) Reply frame received for 3 I0811 00:58:16.395949 7 log.go:181] (0xc002ff0790) (0xc000e38a00) Create stream I0811 00:58:16.395961 7 log.go:181] (0xc002ff0790) (0xc000e38a00) Stream added, broadcasting: 5 I0811 00:58:16.396703 7 log.go:181] (0xc002ff0790) Reply frame received for 5 I0811 00:58:16.465120 7 log.go:181] (0xc002ff0790) Data frame received for 3 I0811 00:58:16.465160 7 log.go:181] (0xc003725ea0) (3) Data frame handling I0811 00:58:16.465188 7 log.go:181] (0xc003725ea0) (3) Data frame sent I0811 00:58:16.466139 7 log.go:181] (0xc002ff0790) Data frame received for 3 I0811 00:58:16.466165 7 log.go:181] (0xc003725ea0) (3) Data frame handling I0811 00:58:16.466319 7 log.go:181] (0xc002ff0790) Data frame received for 5 I0811 00:58:16.466390 7 log.go:181] (0xc000e38a00) (5) Data frame handling I0811 00:58:16.468551 7 log.go:181] (0xc002ff0790) Data frame received for 1 I0811 00:58:16.468579 7 log.go:181] (0xc0032cd040) (1) Data frame handling I0811 00:58:16.468591 7 log.go:181] (0xc0032cd040) (1) Data frame sent I0811 00:58:16.468702 7 log.go:181] (0xc002ff0790) (0xc0032cd040) Stream removed, broadcasting: 1 I0811 00:58:16.468892 7 log.go:181] (0xc002ff0790) (0xc0032cd040) Stream removed, broadcasting: 1 I0811 00:58:16.468913 7 log.go:181] (0xc002ff0790) (0xc003725ea0) Stream removed, broadcasting: 3 I0811 00:58:16.468920 7 log.go:181] (0xc002ff0790) (0xc000e38a00) Stream removed, broadcasting: 5 Aug 11 00:58:16.468: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:16.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0811 00:58:16.468998 7 log.go:181] (0xc002ff0790) Go away received STEP: Destroying namespace "pod-network-test-740" for this suite. 
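Note: the intra-pod UDP check above drives the agnhost netexec container: the test pod curls the /dial endpoint on one netserver (port 8080), which sends a UDP "hostname" probe to the other pod's 8081 listener and reports what answered; the framework's "Waiting for responses: map[]" line means no expected hostnames remain outstanding, i.e. success. The probe can be replayed by hand with the pod IPs from this run (valid only while namespace pod-network-test-740 still exists); the exact response shape shown in the comment is an assumption about netexec's JSON output:

kubectl exec -n pod-network-test-740 test-container-pod -- \
  curl -g -q -s 'http://10.244.2.217:8080/dial?request=hostname&protocol=udp&host=10.244.1.220&port=8081&tries=1'
# a successful probe is expected to print something like {"responses":["netserver-0"]}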
• [SLOW TEST:22.505 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":272,"skipped":4469,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:16.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 11 00:58:16.546: INFO: Waiting up to 5m0s for pod "pod-075ba5cd-bd5c-4773-b085-217964a42626" in namespace "emptydir-6726" to be "Succeeded or Failed" Aug 11 00:58:16.562: INFO: Pod "pod-075ba5cd-bd5c-4773-b085-217964a42626": Phase="Pending", Reason="", readiness=false. Elapsed: 16.558404ms Aug 11 00:58:18.566: INFO: Pod "pod-075ba5cd-bd5c-4773-b085-217964a42626": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020754747s Aug 11 00:58:20.570: INFO: Pod "pod-075ba5cd-bd5c-4773-b085-217964a42626": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024746666s STEP: Saw pod success Aug 11 00:58:20.570: INFO: Pod "pod-075ba5cd-bd5c-4773-b085-217964a42626" satisfied condition "Succeeded or Failed" Aug 11 00:58:20.573: INFO: Trying to get logs from node latest-worker2 pod pod-075ba5cd-bd5c-4773-b085-217964a42626 container test-container: STEP: delete the pod Aug 11 00:58:20.607: INFO: Waiting for pod pod-075ba5cd-bd5c-4773-b085-217964a42626 to disappear Aug 11 00:58:20.696: INFO: Pod pod-075ba5cd-bd5c-4773-b085-217964a42626 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:20.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6726" for this suite. 
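
The (root,0777,tmpfs) EmptyDir case above reduces to a pod with a memory-medium emptyDir volume mounted into a single container running as root. The sketch below assembles such a pod with the client-go API types and prints it as JSON; the busybox image and the shell command are illustrative stand-ins for the suite's own test image and permission checks, not the exact spec the framework generates.

// emptydir_0777.go: sketch of a tmpfs-backed emptyDir pod like the one tested above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c",
					"chmod 0777 /mnt/volume && stat -c %a /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
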
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:20.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:58:20.753: INFO: Waiting up to 5m0s for pod "busybox-user-65534-59911f7c-0957-40aa-8e40-27205fbbaf00" in namespace "security-context-test-1808" to be "Succeeded or Failed" Aug 11 00:58:20.773: INFO: Pod "busybox-user-65534-59911f7c-0957-40aa-8e40-27205fbbaf00": Phase="Pending", Reason="", readiness=false. Elapsed: 19.693665ms Aug 11 00:58:23.002: INFO: Pod "busybox-user-65534-59911f7c-0957-40aa-8e40-27205fbbaf00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248336453s Aug 11 00:58:25.032: INFO: Pod "busybox-user-65534-59911f7c-0957-40aa-8e40-27205fbbaf00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279012655s Aug 11 00:58:27.035: INFO: Pod "busybox-user-65534-59911f7c-0957-40aa-8e40-27205fbbaf00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.281526932s Aug 11 00:58:27.035: INFO: Pod "busybox-user-65534-59911f7c-0957-40aa-8e40-27205fbbaf00" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:27.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1808" for this suite. • [SLOW TEST:6.336 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4498,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:27.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:38.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9609" for this suite. • [SLOW TEST:11.211 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":275,"skipped":4499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:38.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:58:38.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31" in namespace "projected-5019" to be "Succeeded or Failed" Aug 11 00:58:38.357: INFO: Pod "downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.698684ms Aug 11 00:58:40.361: INFO: Pod "downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007025339s Aug 11 00:58:42.365: INFO: Pod "downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011200888s STEP: Saw pod success Aug 11 00:58:42.365: INFO: Pod "downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31" satisfied condition "Succeeded or Failed" Aug 11 00:58:42.368: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31 container client-container: STEP: delete the pod Aug 11 00:58:42.579: INFO: Waiting for pod downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31 to disappear Aug 11 00:58:42.661: INFO: Pod downwardapi-volume-40ed2726-9978-455c-82e0-de60e028dc31 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:42.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5019" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":276,"skipped":4561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:42.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:46.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1686" for this suite. 
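
The projected downwardAPI check above exposes the container's own CPU limit as a file through a resourceFieldRef. A sketch of that wiring with the client-go types follows; the names, the busybox image, and the 1250m limit are illustrative. With the default divisor of 1, the kubelet writes the limit rounded up to whole CPUs (here "2") into the mounted file.

// downward_cpulimit.go: sketch of a projected downwardAPI volume exposing limits.cpu.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("1250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// Resolve this container's own CPU limit.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
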
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4591,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:46.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:47.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5994" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":278,"skipped":4598,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:47.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:47.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4516" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":279,"skipped":4616,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:47.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:58:51.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8825" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4626,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:58:51.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 11 00:58:51.429: INFO: Creating deployment "webserver-deployment" Aug 11 00:58:51.436: INFO: Waiting for observed generation 1 Aug 11 00:58:53.548: INFO: Waiting for all required pods to come up Aug 11 00:58:53.611: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 11 00:59:03.623: INFO: Waiting for deployment "webserver-deployment" to complete Aug 11 00:59:03.629: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 11 00:59:03.635: INFO: Updating deployment webserver-deployment Aug 11 00:59:03.635: INFO: Waiting for observed generation 2 Aug 11 00:59:05.718: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 11 00:59:05.720: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 11 00:59:05.723: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 11 00:59:05.730: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 11 00:59:05.730: INFO: Waiting for the 
second rollout's replicaset to have .spec.replicas = 5 Aug 11 00:59:05.732: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 11 00:59:05.735: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 11 00:59:05.735: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 11 00:59:05.741: INFO: Updating deployment webserver-deployment Aug 11 00:59:05.741: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 11 00:59:06.199: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 11 00:59:06.520: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 11 00:59:08.761: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3235 /apis/apps/v1/namespaces/deployment-3235/deployments/webserver-deployment af9253ba-0eaf-4604-b821-bc6305038586 6065520 3 2020-08-11 00:58:51 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-11 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b0dd78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-11 00:59:05 +0000 UTC,LastTransitionTime:2020-08-11 00:59:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-08-11 00:59:07 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 11 00:59:08.857: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-3235 /apis/apps/v1/namespaces/deployment-3235/replicasets/webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 6065518 3 2020-08-11 00:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment af9253ba-0eaf-4604-b821-bc6305038586 0xc0053ae1f7 0xc0053ae1f8}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af9253ba-0eaf-4604-b821-bc6305038586\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0053ae278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] 
[] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:59:08.857: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 11 00:59:08.857: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-3235 /apis/apps/v1/namespaces/deployment-3235/replicasets/webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 6065494 3 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment af9253ba-0eaf-4604-b821-bc6305038586 0xc0053ae2d7 0xc0053ae2d8}] [] [{kube-controller-manager Update apps/v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af9253ba-0eaf-4604-b821-bc6305038586\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0053ae348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 11 00:59:09.127: INFO: Pod "webserver-deployment-795d758f88-2rnnk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2rnnk webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-2rnnk 1e9b85a8-19b3-4e34-a685-577e6daa9f00 6065497 0 2020-08-11 00:59:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 
ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538e8b7 0xc00538e8b8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerat
ion{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.127: INFO: Pod "webserver-deployment-795d758f88-7t6t4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7t6t4 webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-7t6t4 4f02553b-2409-4cf9-a5ab-819a8e868080 6065549 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538ea67 0xc00538ea68}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.127: INFO: Pod "webserver-deployment-795d758f88-8kv7l" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8kv7l webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-8kv7l c81dc4f5-1e64-48a2-acd8-dbddfed8d9c9 6065517 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538ec17 0xc00538ec18}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.127: INFO: Pod "webserver-deployment-795d758f88-b669z" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-b669z webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-b669z 983e69d2-e2a1-4ee7-a158-2c793db25a88 6065390 0 2020-08-11 00:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538edc7 0xc00538edc8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.128: INFO: Pod "webserver-deployment-795d758f88-bc4j4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bc4j4 webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-bc4j4 db4bec41-f9ce-4fcd-af2a-e0fe2cd60c47 6065543 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538ef77 0xc00538ef78}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.128: INFO: Pod "webserver-deployment-795d758f88-kvngl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kvngl webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-kvngl 3c896c33-64be-49cd-9ba1-7dd14f36f16e 6065559 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538f127 0xc00538f128}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.128: INFO: Pod "webserver-deployment-795d758f88-m7p2p" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-m7p2p webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-m7p2p b5feb983-e169-4dca-b568-e46a1066b470 6065537 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538f2d7 0xc00538f2d8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.128: INFO: Pod "webserver-deployment-795d758f88-pp5jq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pp5jq webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-pp5jq 950162fd-a8a0-4caa-94cd-21ffd149a92c 6065535 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538f487 0xc00538f488}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.129: INFO: Pod "webserver-deployment-795d758f88-qd8tr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qd8tr webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-qd8tr e7c12a6d-2807-4354-a855-e3e444a0c0a9 6065420 0 2020-08-11 00:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538f637 0xc00538f638}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.129: INFO: Pod "webserver-deployment-795d758f88-qmlr6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qmlr6 webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-qmlr6 c82bbedb-0005-4e13-97d6-6fee983a0894 6065541 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538f7e7 0xc00538f7e8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.129: INFO: Pod "webserver-deployment-795d758f88-t8kgf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t8kgf webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-t8kgf aec31c20-b7bb-4c4c-81df-b7bc2cf5f99b 6065557 0 2020-08-11 00:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538f997 0xc00538f998}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.228\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.228,StartTime:2020-08-11 00:59:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.129: INFO: Pod "webserver-deployment-795d758f88-zjc9m" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zjc9m webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-zjc9m 491510d8-7435-4812-85df-cc3925ca6bc3 6065413 0 2020-08-11 00:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538fb77 0xc00538fb78}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.130: INFO: Pod "webserver-deployment-795d758f88-zzzzs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zzzzs webserver-deployment-795d758f88- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-795d758f88-zzzzs bef58df9-ebef-43a5-a56a-6f39354eec47 6065397 0 2020-08-11 00:59:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84 0xc00538fd37 0xc00538fd38}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a1ae6-5be1-43c8-92c7-8962b0dc3f84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.130: INFO: Pod "webserver-deployment-dd94f59b7-44dg6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-44dg6 webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-44dg6 17cbf34e-185e-4562-a01c-ad60437e9a67 6065464 0 2020-08-11 00:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc00538fee7 0xc00538fee8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.130: INFO: Pod "webserver-deployment-dd94f59b7-6kt9w" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6kt9w webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-6kt9w 521cb1ac-a448-4c4a-929c-afed99742406 6065544 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168077 0xc004168078}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.130: INFO: Pod "webserver-deployment-dd94f59b7-86rgn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-86rgn webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-86rgn 92198f68-df0f-436c-bf90-00eaa043e6f8 6065554 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168217 0xc004168218}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.131: INFO: Pod "webserver-deployment-dd94f59b7-8sxfq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8sxfq webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-8sxfq c023d2a9-ee68-49de-b327-674e4f8fee9a 6065525 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc0041683a7 0xc0041683a8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.131: INFO: Pod "webserver-deployment-dd94f59b7-8vl9w" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8vl9w webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-8vl9w 72e4a7d2-6afb-4d19-899c-4675b282525b 6065542 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168697 0xc004168698}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.131: INFO: Pod "webserver-deployment-dd94f59b7-8zxkl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8zxkl webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-8zxkl 5dbfefde-3daf-4555-9289-37c0531abbb4 6065357 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168827 0xc004168828}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.227,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:59:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8608f170f62f76af627a629ff8816bf42ba7b622dcf9d3457741e28bea2ec1d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.131: INFO: Pod "webserver-deployment-dd94f59b7-b9n7l" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-b9n7l webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-b9n7l acae6a3a-5aec-41c7-a6df-323249a2eb7d 6065305 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc0041689d7 0xc0041689d8}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:58:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.221\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.221,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:58:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e77a19fb089b8201fec7d0063e6ce1e80fcd2a243cf7aeda1e6430f048a0a44a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.132: INFO: Pod "webserver-deployment-dd94f59b7-c2hvp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-c2hvp webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-c2hvp 8d3496e1-82e5-4498-971b-fff9c46235e6 6065317 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168b87 0xc004168b88}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.224\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.224,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:58:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://25e6eadb40b94f15f51a875d205df868c9d7369dcddb7851a77e64265fc541e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.132: INFO: Pod "webserver-deployment-dd94f59b7-d9z8m" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d9z8m webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-d9z8m ddcbe851-b692-47e8-bdc3-006be6aeca74 6065340 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168d37 0xc004168d38}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.226\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.226,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:59:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://03744a38928dabf2f85ff76f2f18e7794c1f13c899fa8671362d6753b6b64e64,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.132: INFO: Pod "webserver-deployment-dd94f59b7-fcb5t" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fcb5t webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-fcb5t bd373471-68b2-4cfc-a0bd-6c0787c83905 6065326 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004168ee7 0xc004168ee8}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.222,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:59:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f9f0c8609052ef743b9745ce9096620f154c6f5de17707729cb05741aff97832,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.133: INFO: Pod "webserver-deployment-dd94f59b7-k5wms" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-k5wms webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-k5wms 75210ec5-6855-4035-b6f0-96b790927924 6065532 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169097 0xc004169098}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.133: INFO: Pod "webserver-deployment-dd94f59b7-k9nwn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-k9nwn webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-k9nwn 15781a6a-039b-4568-b2f0-5047e3567c63 6065515 0 2020-08-11 00:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169227 0xc004169228}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.133: INFO: Pod "webserver-deployment-dd94f59b7-knwxm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-knwxm webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-knwxm 427bccc2-c948-4129-b0bc-6e44c22165f5 6065540 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc0041693b7 0xc0041693b8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.133: INFO: Pod "webserver-deployment-dd94f59b7-kp4zw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kp4zw webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-kp4zw 0419c660-5fed-43a6-b5d1-00d4c4ab0e01 6065546 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169567 0xc004169568}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.134: INFO: Pod "webserver-deployment-dd94f59b7-nqpt6" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nqpt6 webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-nqpt6 03209895-e303-4bb3-a39b-d4f7b3ede3ef 6065343 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc0041696f7 0xc0041696f8}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.225,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:58:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://987a182886b4a84c9d38b805ae3f5fd39dba6683d056bcab5cb9a7f204bbefc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.134: INFO: Pod "webserver-deployment-dd94f59b7-pdv92" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pdv92 webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-pdv92 3292b52f-e797-4315-8305-921c72df7d16 6065522 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc0041698d7 0xc0041698d8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.134: INFO: Pod "webserver-deployment-dd94f59b7-rhfpr" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rhfpr webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-rhfpr f8e7ae0e-0101-40f0-a2ed-61e6495306d6 6065508 0 2020-08-11 00:59:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169a77 0xc004169a78}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.134: INFO: Pod "webserver-deployment-dd94f59b7-vrlgs" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vrlgs webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-vrlgs e885bc27-c145-468f-a8dc-64c6fcca3bb9 6065347 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169c07 0xc004169c08}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.224\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.224,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:59:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://76bbb83b59f13aa58dd3634b02ff2fda582ce77d0f9e8c40d5176c42d18bc697,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.135: INFO: Pod "webserver-deployment-dd94f59b7-vzt5g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vzt5g webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-vzt5g e6b69994-dd37-4746-add6-49f39ece522a 6065530 0 2020-08-11 00:59:06 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169db7 0xc004169db8}] [] [{kube-controller-manager Update v1 2020-08-11 00:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-11 00:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 11 00:59:09.135: INFO: Pod "webserver-deployment-dd94f59b7-zxkb8" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zxkb8 webserver-deployment-dd94f59b7- deployment-3235 /api/v1/namespaces/deployment-3235/pods/webserver-deployment-dd94f59b7-zxkb8 b3714009-0be9-42b4-b242-2e11d7876afe 6065287 0 2020-08-11 00:58:51 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 d4c4f176-d375-43b5-b35c-6b86ad696507 0xc004169f47 0xc004169f48}] [] [{kube-controller-manager Update v1 2020-08-11 00:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4c4f176-d375-43b5-b35c-6b86ad696507\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-11 00:58:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.223\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6h562,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6h562,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6h562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-11 00:58:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.223,StartTime:2020-08-11 00:58:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-11 00:58:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://14fe396570c22ee9e406128f9b61c5ad4468245636fade0c2e060222ae41cc12,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:09.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3235" for this suite. • [SLOW TEST:18.487 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":281,"skipped":4638,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:09.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:11.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9622" for this suite. 
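The QOS-class spec above only needs a pod whose resource requests equal its limits for both cpu and memory. The following is a minimal client-go sketch of the same assertion, not the suite's actual code; the pod name, namespace, image, and resource quantities are all illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Identical requests and limits for both cpu and memory place the pod in
	// the Guaranteed QOS class; the quantities here are illustrative.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-class-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "httpd",
				Image: "docker.io/library/httpd:2.4.38-alpine",
				Resources: corev1.ResourceRequirements{
					Requests: res,
					Limits:   res,
				},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The API server fills in status.qosClass when the pod is created, so it
	// is readable before the pod is scheduled or running.
	fmt.Println("QOSClass:", created.Status.QOSClass) // expected: Guaranteed
}
```

Note that `status.qosClass` is populated server-side at creation time, which is why the spec can verify it immediately after submitting the pod; the dry-run output below likewise shows `qosClass: BestEffort` on a pod still in the Pending phase.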
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":282,"skipped":4651,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:11.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 11 00:59:12.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4224' Aug 11 00:59:13.427: INFO: stderr: "" Aug 11 00:59:13.427: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Aug 11 00:59:13.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-4224' Aug 11 00:59:13.631: INFO: stderr: "" Aug 11 00:59:13.631: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-11T00:59:13Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-11T00:59:12Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4224\",\n \"resourceVersion\": \"6065593\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4224/pods/e2e-test-httpd-pod\",\n \"uid\": \"97bb81c3-a90a-46cf-b6d2-fdf09088f3f9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5226g\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n 
\"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5226g\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5226g\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-11T00:59:13Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Aug 11 00:59:13.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-4224' Aug 11 00:59:14.744: INFO: stderr: "W0811 00:59:13.702097 3593 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Aug 11 00:59:14.744: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Aug 11 00:59:14.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4224' Aug 11 00:59:24.778: INFO: stderr: "" Aug 11 00:59:24.778: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:24.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4224" for this suite. 
• [SLOW TEST:13.661 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":283,"skipped":4654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:24.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 11 00:59:26.079: INFO: Waiting up to 5m0s for pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73" in namespace "emptydir-3723" to be "Succeeded or Failed" Aug 11 00:59:26.153: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73": Phase="Pending", Reason="", readiness=false. Elapsed: 73.806827ms Aug 11 00:59:28.434: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354964748s Aug 11 00:59:30.440: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361101213s Aug 11 00:59:32.633: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73": Phase="Running", Reason="", readiness=true. Elapsed: 6.554039931s Aug 11 00:59:34.860: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73": Phase="Running", Reason="", readiness=true. Elapsed: 8.780787308s Aug 11 00:59:37.003: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.924084142s STEP: Saw pod success Aug 11 00:59:37.003: INFO: Pod "pod-7194aa6c-6080-4028-83a6-9712e5363a73" satisfied condition "Succeeded or Failed" Aug 11 00:59:37.099: INFO: Trying to get logs from node latest-worker2 pod pod-7194aa6c-6080-4028-83a6-9712e5363a73 container test-container: STEP: delete the pod Aug 11 00:59:37.566: INFO: Waiting for pod pod-7194aa6c-6080-4028-83a6-9712e5363a73 to disappear Aug 11 00:59:37.569: INFO: Pod pod-7194aa6c-6080-4028-83a6-9712e5363a73 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:37.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3723" for this suite. 
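The (root,0666,tmpfs) case reduces to a memory-backed emptyDir plus a permission check on a file created as root. A minimal standalone sketch, assuming the hypothetical pod name emptydir-demo and plain busybox in place of the e2e mounttest image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - mountPath: /test-volume
      name: test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, the [LinuxOnly] part of the test name
EOF

kubectl logs emptydir-demo should then show 666 plus a tmpfs mount on /test-volume, mirroring what the suite reads back from the test-container logs above.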
• [SLOW TEST:12.748 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4687,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:37.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 00:59:39.080: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 00:59:41.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 00:59:43.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704379, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 00:59:46.546: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:46.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8321" for this suite. STEP: Destroying namespace "webhook-8321-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.621 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":285,"skipped":4693,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:47.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:59:47.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940" in namespace "projected-1663" to be "Succeeded or Failed" Aug 11 00:59:47.978: INFO: Pod "downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940": Phase="Pending", Reason="", readiness=false. Elapsed: 336.892051ms Aug 11 00:59:49.982: INFO: Pod "downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340861712s Aug 11 00:59:51.990: INFO: Pod "downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.348540407s STEP: Saw pod success Aug 11 00:59:51.990: INFO: Pod "downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940" satisfied condition "Succeeded or Failed" Aug 11 00:59:52.008: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940 container client-container: STEP: delete the pod Aug 11 00:59:52.077: INFO: Waiting for pod downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940 to disappear Aug 11 00:59:52.346: INFO: Pod downwardapi-volume-1c57b08c-8636-4d1d-8a7c-6e2914183940 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:52.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1663" for this suite. • [SLOW TEST:5.093 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4693,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:52.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 11 00:59:52.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224" in namespace "downward-api-5836" to be "Succeeded or Failed" Aug 11 00:59:52.536: INFO: Pod "downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224": Phase="Pending", Reason="", readiness=false. Elapsed: 99.640057ms Aug 11 00:59:54.584: INFO: Pod "downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14755727s Aug 11 00:59:56.588: INFO: Pod "downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.151662212s STEP: Saw pod success Aug 11 00:59:56.588: INFO: Pod "downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224" satisfied condition "Succeeded or Failed" Aug 11 00:59:56.591: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224 container client-container: STEP: delete the pod Aug 11 00:59:56.641: INFO: Waiting for pod downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224 to disappear Aug 11 00:59:56.648: INFO: Pod downwardapi-volume-19ed1e43-20d5-45d4-83b7-6ad55689d224 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 00:59:56.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5836" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 00:59:56.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 11 00:59:56.788: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Aug 11 00:59:57.450: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 11 00:59:59.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 01:00:01.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704397, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 11 01:00:04.420: INFO: Waited 728.480692ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:00:04.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9825" for this suite. 
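The Aggregator test deploys the 1.17 sample-apiserver and then registers it so that kube-aggregator proxies its API group. The registration itself is a single APIService object; a minimal sketch with illustrative names (the wardle.example.com group follows the upstream sample-apiserver, the service name is invented, and insecureSkipTLSVerify is a demo shortcut - real registrations pin a caBundle):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true   # demo only; prefer caBundle
  service:                      # backend for /apis/wardle.example.com/v1alpha1
    name: sample-api
    namespace: aggregator-9825
    port: 443
EOF

Once the APIService reports Available, kubectl get --raw /apis/wardle.example.com/v1alpha1 round-trips through the aggregator to the extension server, which is the readiness that the "Waited 728.480692ms for the sample-apiserver to be ready" line above measures.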
• [SLOW TEST:8.640 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":288,"skipped":4752,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:00:05.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0811 01:00:46.538417 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 11 01:01:48.557: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 11 01:01:48.557: INFO: Deleting pod "simpletest.rc-28msg" in namespace "gc-2903" Aug 11 01:01:48.635: INFO: Deleting pod "simpletest.rc-cb97b" in namespace "gc-2903" Aug 11 01:01:48.700: INFO: Deleting pod "simpletest.rc-lhn9v" in namespace "gc-2903" Aug 11 01:01:48.748: INFO: Deleting pod "simpletest.rc-p6q9n" in namespace "gc-2903" Aug 11 01:01:49.030: INFO: Deleting pod "simpletest.rc-qgs9k" in namespace "gc-2903" Aug 11 01:01:49.223: INFO: Deleting pod "simpletest.rc-slfw2" in namespace "gc-2903" Aug 11 01:01:49.288: INFO: Deleting pod "simpletest.rc-tfddl" in namespace "gc-2903" Aug 11 01:01:49.683: INFO: Deleting pod "simpletest.rc-v9rwh" in namespace "gc-2903" Aug 11 01:01:49.743: INFO: Deleting pod "simpletest.rc-w7rq7" in namespace "gc-2903" Aug 11 01:01:50.045: INFO: Deleting pod "simpletest.rc-z4hfn" in namespace "gc-2903" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:01:50.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2903" for this suite. 
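The garbage-collector case hinges entirely on the delete options: removing the RC with an Orphan propagation policy deletes the owner but strips the ownerReferences from its pods instead of cascading, which is why the suite has to delete the ten simpletest.rc-* pods itself afterwards. A minimal CLI sketch, assuming a hypothetical RC named simpletest whose pods carry a name=simpletest label (current kubectl spells the flag --cascade=orphan; clients of this log's vintage used --cascade=false):

# delete only the owner; dependents are orphaned rather than garbage-collected
kubectl delete rc simpletest --cascade=orphan -n gc-2903
# the pods survive, now without an ownerReference
kubectl get pods -n gc-2903 -l name=simpletest
kubectl get pods -n gc-2903 -o jsonpath='{.items[*].metadata.ownerReferences}'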
• [SLOW TEST:105.365 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":289,"skipped":4755,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:01:50.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:01:51.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7131" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4757,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:01:51.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-ac46e7fb-c5ed-43ca-b88a-ad03bbac6eae STEP: Creating a pod to test consume configMaps Aug 11 01:01:52.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e" in namespace "configmap-2035" to be "Succeeded or Failed" Aug 11 01:01:52.335: INFO: Pod "pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.614747ms Aug 11 01:01:54.340: INFO: Pod "pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062578551s Aug 11 01:01:56.364: INFO: Pod "pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08622729s STEP: Saw pod success Aug 11 01:01:56.364: INFO: Pod "pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e" satisfied condition "Succeeded or Failed" Aug 11 01:01:56.398: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e container configmap-volume-test: STEP: delete the pod Aug 11 01:01:56.521: INFO: Waiting for pod pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e to disappear Aug 11 01:01:56.581: INFO: Pod pod-configmaps-50d4266c-1e6b-4507-a1e7-16d0209f610e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:01:56.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2035" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4759,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:01:56.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 11 01:02:01.553: INFO: Successfully updated pod "labelsupdate8344271c-943c-4a27-82d5-ca1bdc43a9be" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:02:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7578" for this suite. 
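The labels-update test leans on the downwardAPI source of a projected volume: the kubelet materializes pod metadata as files and rewrites them when the metadata changes, so the "Successfully updated pod labelsupdate..." step is followed by the container observing the new label on disk. A minimal sketch with illustrative pod and label names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

# relabel the running pod; the kubelet refreshes /etc/podinfo/labels shortly after
kubectl label pod labels-demo stage=after --overwrite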
• [SLOW TEST:8.771 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4764,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:02:05.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 01:02:11.757: INFO: DNS probes using dns-7792/dns-test-4365c92b-f4d2-41ba-982a-5c846d9fc726 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:02:11.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7792" for this suite. 
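Stripped of the result-file plumbing, the wheezy/jessie probe loops above are plain dig queries against the cluster resolver from inside a pod, once over UDP and once over TCP, for the service name and for the pod's generated A record. The core check, using an illustrative dnsutils client image:

kubectl run dns-probe --rm -it \
    --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- sh -c '
dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A'

A non-empty answer section on each transport is what makes the probe write its OK result files, and a full set of those files is what "DNS probes ... succeeded" reports.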
• [SLOW TEST:6.282 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":293,"skipped":4784,"failed":0} SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:02:11.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-pgxqr in namespace proxy-7236 I0811 01:02:12.351264 7 runners.go:190] Created replication controller with name: proxy-service-pgxqr, namespace: proxy-7236, replica count: 1 I0811 01:02:13.401718 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 01:02:14.401936 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0811 01:02:15.402147 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:16.402410 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:17.402654 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:18.402918 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:19.403152 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:20.403399 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:21.403625 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:22.403854 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0811 01:02:23.404124 7 runners.go:190] proxy-service-pgxqr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 11 01:02:23.408: INFO: setup took 11.255098488s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts 
Aug 11 01:02:23.415: INFO: (0) /api/v1/namespaces/proxy-7236/pods/proxy-service-pgxqr-ptrrr/proxy/: test (200; 7.320121ms) Aug 11 01:02:23.419: INFO: (0) /api/v1/namespaces/proxy-7236/pods/http:proxy-service-pgxqr-ptrrr:160/proxy/: foo (200; 10.636591ms) Aug 11 01:02:23.419: INFO: (0) /api/v1/namespaces/proxy-7236/pods/http:proxy-service-pgxqr-ptrrr:162/proxy/: bar (200; 10.670634ms) Aug 11 01:02:23.419: INFO: (0) /api/v1/namespaces/proxy-7236/pods/proxy-service-pgxqr-ptrrr:160/proxy/: foo (200; 10.797327ms) Aug 11 01:02:23.419: INFO: (0) /api/v1/namespaces/proxy-7236/pods/proxy-service-pgxqr-ptrrr:162/proxy/: bar (200; 11.332696ms) Aug 11 01:02:23.422: INFO: (0) /api/v1/namespaces/proxy-7236/services/http:proxy-service-pgxqr:portname2/proxy/: bar (200; 13.905281ms) Aug 11 01:02:23.425: INFO: (0) /api/v1/namespaces/proxy-7236/services/https:proxy-service-pgxqr:tlsportname2/proxy/: tls qux (200; 17.318129ms) Aug 11 01:02:23.426: INFO: (0) /api/v1/namespaces/proxy-7236/services/proxy-service-pgxqr:portname1/proxy/: foo (200; 17.827911ms)
[per-request proxy log condensed: rounds (0) through (19) each hit the same 16 URL variants - pod root, pod ports 160/162/1080, scheme-qualified http:/https: pod ports including 443/460/462, and service ports portname1/portname2/tlsportname1/tlsportname2; every logged request returned HTTP 200, with latencies of roughly 2ms to 18ms]
(200; 9.889059ms) Aug 11 01:02:23.557: INFO: (19) /api/v1/namespaces/proxy-7236/pods/proxy-service-pgxqr-ptrrr/proxy/: test (200; 9.850231ms) Aug 11 01:02:23.557: INFO: (19) /api/v1/namespaces/proxy-7236/pods/http:proxy-service-pgxqr-ptrrr:162/proxy/: bar (200; 9.849458ms) Aug 11 01:02:23.558: INFO: (19) /api/v1/namespaces/proxy-7236/services/http:proxy-service-pgxqr:portname1/proxy/: foo (200; 10.737559ms) Aug 11 01:02:23.558: INFO: (19) /api/v1/namespaces/proxy-7236/services/https:proxy-service-pgxqr:tlsportname2/proxy/: tls qux (200; 10.766396ms) Aug 11 01:02:23.558: INFO: (19) /api/v1/namespaces/proxy-7236/services/proxy-service-pgxqr:portname2/proxy/: bar (200; 10.749558ms) Aug 11 01:02:23.558: INFO: (19) /api/v1/namespaces/proxy-7236/services/proxy-service-pgxqr:portname1/proxy/: foo (200; 10.790834ms) STEP: deleting ReplicationController proxy-service-pgxqr in namespace proxy-7236, will wait for the garbage collector to delete the pods Aug 11 01:02:23.616: INFO: Deleting ReplicationController proxy-service-pgxqr took: 6.952436ms Aug 11 01:02:24.017: INFO: Terminating ReplicationController proxy-service-pgxqr pods took: 400.171549ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:02:26.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7236" for this suite. • [SLOW TEST:14.638 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":294,"skipped":4790,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:02:26.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 11 01:02:26.625: INFO: Waiting up to 5m0s for pod "pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0" in namespace "emptydir-2671" to be "Succeeded or Failed" Aug 11 01:02:26.636: INFO: Pod "pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.64096ms Aug 11 01:02:28.640: INFO: Pod "pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014601875s Aug 11 01:02:30.644: INFO: Pod "pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018377341s STEP: Saw pod success Aug 11 01:02:30.644: INFO: Pod "pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0" satisfied condition "Succeeded or Failed" Aug 11 01:02:30.646: INFO: Trying to get logs from node latest-worker2 pod pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0 container test-container: STEP: delete the pod Aug 11 01:02:30.762: INFO: Waiting for pod pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0 to disappear Aug 11 01:02:30.798: INFO: Pod pod-ac66ffc2-51a5-4e7d-89bf-c11558875ab0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:02:30.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2671" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4795,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:02:30.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 11 01:02:30.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4071' Aug 11 01:02:31.328: INFO: stderr: "" Aug 11 01:02:31.328: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 11 01:02:31.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:31.498: INFO: stderr: "" Aug 11 01:02:31.498: INFO: stdout: "update-demo-nautilus-6t9gb update-demo-nautilus-nqzv6 " Aug 11 01:02:31.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t9gb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:31.626: INFO: stderr: "" Aug 11 01:02:31.626: INFO: stdout: "" Aug 11 01:02:31.626: INFO: update-demo-nautilus-6t9gb is created but not running Aug 11 01:02:36.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:36.744: INFO: stderr: "" Aug 11 01:02:36.744: INFO: stdout: "update-demo-nautilus-6t9gb update-demo-nautilus-nqzv6 " Aug 11 01:02:36.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t9gb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:36.837: INFO: stderr: "" Aug 11 01:02:36.837: INFO: stdout: "true" Aug 11 01:02:36.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t9gb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:36.941: INFO: stderr: "" Aug 11 01:02:36.941: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 11 01:02:36.941: INFO: validating pod update-demo-nautilus-6t9gb Aug 11 01:02:36.946: INFO: got data: { "image": "nautilus.jpg" } Aug 11 01:02:36.946: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 11 01:02:36.947: INFO: update-demo-nautilus-6t9gb is verified up and running Aug 11 01:02:36.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:37.054: INFO: stderr: "" Aug 11 01:02:37.054: INFO: stdout: "true" Aug 11 01:02:37.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:37.174: INFO: stderr: "" Aug 11 01:02:37.174: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 11 01:02:37.174: INFO: validating pod update-demo-nautilus-nqzv6 Aug 11 01:02:37.179: INFO: got data: { "image": "nautilus.jpg" } Aug 11 01:02:37.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 11 01:02:37.179: INFO: update-demo-nautilus-nqzv6 is verified up and running STEP: scaling down the replication controller Aug 11 01:02:37.182: INFO: scanned /root for discovery docs: Aug 11 01:02:37.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4071' Aug 11 01:02:38.344: INFO: stderr: "" Aug 11 01:02:38.344: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 11 01:02:38.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:38.450: INFO: stderr: "" Aug 11 01:02:38.450: INFO: stdout: "update-demo-nautilus-6t9gb update-demo-nautilus-nqzv6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 11 01:02:43.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:43.553: INFO: stderr: "" Aug 11 01:02:43.553: INFO: stdout: "update-demo-nautilus-6t9gb update-demo-nautilus-nqzv6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 11 01:02:48.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:48.652: INFO: stderr: "" Aug 11 01:02:48.652: INFO: stdout: "update-demo-nautilus-nqzv6 " Aug 11 01:02:48.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:48.746: INFO: stderr: "" Aug 11 01:02:48.747: INFO: stdout: "true" Aug 11 01:02:48.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:48.837: INFO: stderr: "" Aug 11 01:02:48.837: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 11 01:02:48.837: INFO: validating pod update-demo-nautilus-nqzv6 Aug 11 01:02:48.839: INFO: got data: { "image": "nautilus.jpg" } Aug 11 01:02:48.839: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 11 01:02:48.839: INFO: update-demo-nautilus-nqzv6 is verified up and running STEP: scaling up the replication controller Aug 11 01:02:48.841: INFO: scanned /root for discovery docs: Aug 11 01:02:48.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4071' Aug 11 01:02:50.127: INFO: stderr: "" Aug 11 01:02:50.127: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 11 01:02:50.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:50.253: INFO: stderr: "" Aug 11 01:02:50.253: INFO: stdout: "update-demo-nautilus-nqzv6 update-demo-nautilus-v8wxm " Aug 11 01:02:50.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:50.357: INFO: stderr: "" Aug 11 01:02:50.357: INFO: stdout: "true" Aug 11 01:02:50.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:50.462: INFO: stderr: "" Aug 11 01:02:50.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 11 01:02:50.462: INFO: validating pod update-demo-nautilus-nqzv6 Aug 11 01:02:50.465: INFO: got data: { "image": "nautilus.jpg" } Aug 11 01:02:50.465: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 11 01:02:50.465: INFO: update-demo-nautilus-nqzv6 is verified up and running Aug 11 01:02:50.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v8wxm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:50.583: INFO: stderr: "" Aug 11 01:02:50.583: INFO: stdout: "" Aug 11 01:02:50.583: INFO: update-demo-nautilus-v8wxm is created but not running Aug 11 01:02:55.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4071' Aug 11 01:02:55.704: INFO: stderr: "" Aug 11 01:02:55.704: INFO: stdout: "update-demo-nautilus-nqzv6 update-demo-nautilus-v8wxm " Aug 11 01:02:55.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:55.807: INFO: stderr: "" Aug 11 01:02:55.807: INFO: stdout: "true" Aug 11 01:02:55.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqzv6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:55.908: INFO: stderr: "" Aug 11 01:02:55.908: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 11 01:02:55.909: INFO: validating pod update-demo-nautilus-nqzv6 Aug 11 01:02:55.912: INFO: got data: { "image": "nautilus.jpg" } Aug 11 01:02:55.912: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 11 01:02:55.912: INFO: update-demo-nautilus-nqzv6 is verified up and running Aug 11 01:02:55.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v8wxm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:56.032: INFO: stderr: "" Aug 11 01:02:56.032: INFO: stdout: "true" Aug 11 01:02:56.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v8wxm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4071' Aug 11 01:02:56.139: INFO: stderr: "" Aug 11 01:02:56.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 11 01:02:56.139: INFO: validating pod update-demo-nautilus-v8wxm Aug 11 01:02:56.143: INFO: got data: { "image": "nautilus.jpg" } Aug 11 01:02:56.143: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 11 01:02:56.143: INFO: update-demo-nautilus-v8wxm is verified up and running STEP: using delete to clean up resources Aug 11 01:02:56.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4071' Aug 11 01:02:56.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 11 01:02:56.264: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 11 01:02:56.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4071' Aug 11 01:02:56.367: INFO: stderr: "No resources found in kubectl-4071 namespace.\n" Aug 11 01:02:56.367: INFO: stdout: "" Aug 11 01:02:56.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4071 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 11 01:02:56.470: INFO: stderr: "" Aug 11 01:02:56.470: INFO: stdout: "update-demo-nautilus-nqzv6\nupdate-demo-nautilus-v8wxm\n" Aug 11 01:02:56.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4071' Aug 11 01:02:57.089: INFO: stderr: "No resources found in kubectl-4071 namespace.\n" Aug 11 01:02:57.089: INFO: stdout: "" Aug 11 01:02:57.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4071 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 11 01:02:57.334: INFO: stderr: "" Aug 11 01:02:57.334: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:02:57.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4071" for this suite. 
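Cleanup here uses immediate deletion, which is why kubectl prints the warning about resources possibly lingering. A sketch of the same teardown, deleting the controller by name rather than via the original `-f -` manifest (an assumption made for brevity):

  # Skip graceful termination; the API object is removed before the pods are confirmed gone
  kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-4071
  # Verify nothing labelled name=update-demo survives
  kubectl get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4071

The follow-up go-template query above that filters on .metadata.deletionTimestamp exists precisely because force-deleted pods can still show up in listings for a short window, as the first poll here demonstrates.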
• [SLOW TEST:26.535 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":296,"skipped":4811,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:02:57.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-54 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 11 01:02:57.713: INFO: Found 0 stateful pods, waiting for 3 Aug 11 01:03:07.718: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 11 01:03:07.718: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 11 01:03:07.719: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 11 01:03:17.718: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 11 01:03:17.718: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 11 01:03:17.718: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 11 01:03:17.746: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 11 01:03:27.809: INFO: Updating stateful set ss2 Aug 11 01:03:27.849: INFO: Waiting for Pod statefulset-54/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 11 01:03:38.502: INFO: Found 2 stateful pods, waiting for 3 Aug 11 01:03:48.506: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 11 01:03:48.506: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 11 01:03:48.506: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: 
Performing a phased rolling update Aug 11 01:03:48.531: INFO: Updating stateful set ss2 Aug 11 01:03:48.618: INFO: Waiting for Pod statefulset-54/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 11 01:03:58.643: INFO: Updating stateful set ss2 Aug 11 01:03:58.772: INFO: Waiting for StatefulSet statefulset-54/ss2 to complete update Aug 11 01:03:58.772: INFO: Waiting for Pod statefulset-54/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 11 01:04:08.781: INFO: Deleting all statefulset in ns statefulset-54 Aug 11 01:04:08.784: INFO: Scaling statefulset ss2 to 0 Aug 11 01:04:38.863: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 01:04:38.866: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:04:38.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-54" for this suite. • [SLOW TEST:101.599 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":297,"skipped":4811,"failed":0} SSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:04:38.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:04:39.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9389" for this suite. 
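The canary and phased rolling updates recorded in the StatefulSet spec above are driven by the partition field of the apps/v1 RollingUpdate strategy: only pods whose ordinal is greater than or equal to the partition move to the new revision. A minimal sketch against this run's set (names taken from the log; the patch values are illustrative):

  # Canary: with partition=2, only ss2-2 picks up the new template
  kubectl patch statefulset ss2 --namespace=statefulset-54 \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  # Phased rollout: lowering the partition releases more ordinals, highest first
  kubectl patch statefulset ss2 --namespace=statefulset-54 \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'

Setting the partition higher than the replica count, as the "Not applying an update when the partition is greater than the number of replicas" step does, is a supported way of staging a template change without updating any pod.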
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":298,"skipped":4814,"failed":0} ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:04:39.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 11 01:04:43.731: INFO: Successfully updated pod "annotationupdate039e4b84-d2d9-4416-b064-7d695c9b0ccd" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:04:45.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7544" for this suite. • [SLOW TEST:6.702 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4814,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:04:45.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Aug 11 01:04:45.835: INFO: Waiting up to 5m0s for pod "client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257" in namespace "containers-9328" to be "Succeeded or Failed" Aug 11 01:04:45.838: INFO: Pod "client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257": Phase="Pending", Reason="", readiness=false. Elapsed: 3.161123ms Aug 11 01:04:47.842: INFO: Pod "client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00712059s Aug 11 01:04:49.847: INFO: Pod "client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011923299s STEP: Saw pod success Aug 11 01:04:49.847: INFO: Pod "client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257" satisfied condition "Succeeded or Failed" Aug 11 01:04:49.849: INFO: Trying to get logs from node latest-worker2 pod client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257 container test-container: STEP: delete the pod Aug 11 01:04:49.903: INFO: Waiting for pod client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257 to disappear Aug 11 01:04:49.921: INFO: Pod client-containers-afc5d801-7bf9-4ec4-ad92-73a55e240257 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:04:49.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9328" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:04:49.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
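The Docker Containers spec above relies on the fact that command: in a container spec replaces the image's ENTRYPOINT (and args: its CMD). A self-contained sketch of such an override (manifest name and image are illustrative, not taken from this run):

  kubectl create -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-command-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      # command replaces the image's entrypoint entirely
      command: ["/bin/sh", "-c", "echo entrypoint overridden"]
  EOF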
Aug 11 01:04:50.036: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:50.041: INFO: Number of nodes with available pods: 0 Aug 11 01:04:50.041: INFO: Node latest-worker is running more than one daemon pod Aug 11 01:04:51.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:51.049: INFO: Number of nodes with available pods: 0 Aug 11 01:04:51.049: INFO: Node latest-worker is running more than one daemon pod Aug 11 01:04:52.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:52.120: INFO: Number of nodes with available pods: 0 Aug 11 01:04:52.120: INFO: Node latest-worker is running more than one daemon pod Aug 11 01:04:53.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:53.049: INFO: Number of nodes with available pods: 0 Aug 11 01:04:53.049: INFO: Node latest-worker is running more than one daemon pod Aug 11 01:04:54.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:54.050: INFO: Number of nodes with available pods: 0 Aug 11 01:04:54.050: INFO: Node latest-worker is running more than one daemon pod Aug 11 01:04:55.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:55.049: INFO: Number of nodes with available pods: 1 Aug 11 01:04:55.049: INFO: Node latest-worker2 is running more than one daemon pod Aug 11 01:04:56.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:56.049: INFO: Number of nodes with available pods: 2 Aug 11 01:04:56.049: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 11 01:04:56.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 01:04:56.091: INFO: Number of nodes with available pods: 2 Aug 11 01:04:56.091: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
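Every poll above skips latest-control-plane because the DaemonSet's pods carry no toleration for the node-role.kubernetes.io/master:NoSchedule taint shown in the log. A DaemonSet that should also cover such nodes would add one; a sketch as a strategic-merge patch (daemonset name and namespace from this run, otherwise illustrative):

  kubectl patch daemonset daemon-set --namespace=daemonsets-3955 \
    -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}}}'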
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3955, will wait for the garbage collector to delete the pods Aug 11 01:04:57.174: INFO: Deleting DaemonSet.extensions daemon-set took: 6.85016ms Aug 11 01:04:57.575: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.210561ms Aug 11 01:05:03.279: INFO: Number of nodes with available pods: 0 Aug 11 01:05:03.279: INFO: Number of running nodes: 0, number of available pods: 0 Aug 11 01:05:03.282: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3955/daemonsets","resourceVersion":"6068038"},"items":null} Aug 11 01:05:03.284: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3955/pods","resourceVersion":"6068038"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:05:03.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3955" for this suite. • [SLOW TEST:13.373 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":301,"skipped":4877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:05:03.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d9942761-a6cd-4a99-8e29-ef07fbdbc661 STEP: Creating a pod to test consume secrets Aug 11 01:05:03.421: INFO: Waiting up to 5m0s for pod "pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d" in namespace "secrets-2019" to be "Succeeded or Failed" Aug 11 01:05:03.424: INFO: Pod "pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855944ms Aug 11 01:05:05.439: INFO: Pod "pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01817247s Aug 11 01:05:07.443: INFO: Pod "pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022772382s STEP: Saw pod success Aug 11 01:05:07.443: INFO: Pod "pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d" satisfied condition "Succeeded or Failed" Aug 11 01:05:07.447: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d container secret-volume-test: STEP: delete the pod Aug 11 01:05:07.481: INFO: Waiting for pod pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d to disappear Aug 11 01:05:07.517: INFO: Pod pod-secrets-47cb3424-6133-4d95-8228-b89d03ac630d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:05:07.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2019" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":302,"skipped":4931,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 11 01:05:07.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 11 01:05:08.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 11 01:05:10.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704708, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704708, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704708, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732704708, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 11 01:05:13.532: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules 
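The listing and collection-deletion steps that follow operate on ordinary admissionregistration.k8s.io/v1 objects, so they can be mirrored with stock kubectl (the label selector below is an illustrative stand-in; the test selects its own webhooks by a run-specific label):

  # Validating webhook configurations are cluster-scoped
  kubectl get validatingwebhookconfigurations
  # Delete a labelled collection in one call
  kubectl delete validatingwebhookconfigurations -l e2e-run=illustrative

Once the configurations are gone, the non-compliant configMap is no longer rejected, which is how the second creation attempt after the deletion can pass.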
STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 11 01:05:13.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9576" for this suite. STEP: Destroying namespace "webhook-9576-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.549 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":303,"skipped":4935,"failed":0} Aug 11 01:05:14.075: INFO: Running AfterSuite actions on all nodes Aug 11 01:05:14.075: INFO: Running AfterSuite actions on node 1 Aug 11 01:05:14.075: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4935,"failed":0} Ran 303 of 5238 Specs in 6290.409 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4935 Skipped PASS
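For context on how a transcript like this is produced: the suite is the upstream Kubernetes e2e binary run with a Conformance focus, here under OPNFV Functest (note the JUnit report path above). An illustrative invocation, assuming a locally built e2e.test; exact flags vary across versions:

  ./e2e.test -ginkgo.focus='\[Conformance\]' \
    -kubeconfig=/root/.kube/config \
    -report-dir=/home/opnfv/functest/results/k8s_conformance

The closing summary (303 of 5238 specs ran) reflects that focus: everything outside the Conformance set is reported as skipped rather than executed.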